May 17 00:21:01.891215 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 00:21:01.891235 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:21:01.891243 kernel: BIOS-provided physical RAM map: May 17 00:21:01.891249 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable May 17 00:21:01.891254 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved May 17 00:21:01.891262 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:21:01.891269 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable May 17 00:21:01.891274 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved May 17 00:21:01.891280 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 17 00:21:01.891285 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 17 00:21:01.891290 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:21:01.891296 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:21:01.891301 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable May 17 00:21:01.891309 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 17 00:21:01.891315 kernel: NX (Execute Disable) protection: active May 17 00:21:01.891321 kernel: APIC: Static calls initialized May 17 00:21:01.891327 kernel: SMBIOS 2.8 present. 
May 17 00:21:01.891333 kernel: DMI: Linode Compute Instance, BIOS Not Specified May 17 00:21:01.891338 kernel: Hypervisor detected: KVM May 17 00:21:01.891346 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:21:01.891352 kernel: kvm-clock: using sched offset of 4487571610 cycles May 17 00:21:01.891358 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:21:01.891364 kernel: tsc: Detected 1999.999 MHz processor May 17 00:21:01.891370 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:21:01.891376 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:21:01.891382 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 May 17 00:21:01.891389 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 17 00:21:01.891394 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:21:01.891402 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 May 17 00:21:01.891408 kernel: Using GB pages for direct mapping May 17 00:21:01.891414 kernel: ACPI: Early table checksum verification disabled May 17 00:21:01.891420 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS ) May 17 00:21:01.891426 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891432 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891438 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891443 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 17 00:21:01.891449 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891457 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891463 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891469 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:21:01.891478 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] May 17 00:21:01.891485 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] May 17 00:21:01.891491 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 17 00:21:01.891499 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] May 17 00:21:01.891505 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] May 17 00:21:01.891511 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] May 17 00:21:01.891518 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] May 17 00:21:01.891524 kernel: No NUMA configuration found May 17 00:21:01.891530 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] May 17 00:21:01.891536 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff] May 17 00:21:01.891542 kernel: Zone ranges: May 17 00:21:01.891550 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:21:01.891556 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 17 00:21:01.891563 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] May 17 00:21:01.891569 kernel: Movable zone start for each node May 17 00:21:01.891575 kernel: Early memory node ranges May 17 00:21:01.891581 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:21:01.891587 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 17 00:21:01.891593 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] May 17 00:21:01.891599 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 17 00:21:01.891605 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:21:01.891613 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:21:01.891620 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 17 00:21:01.891626 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:21:01.891632 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:21:01.891638 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:21:01.891644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:21:01.891650 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:21:01.891656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:21:01.892686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:21:01.892700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:21:01.892707 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:21:01.892714 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:21:01.892720 kernel: TSC deadline timer available May 17 00:21:01.892726 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:21:01.892732 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:21:01.892738 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:21:01.892745 kernel: kvm-guest: setup PV sched yield May 17 00:21:01.892751 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:21:01.892759 kernel: Booting paravirtualized kernel on KVM May 17 00:21:01.892765 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:21:01.892772 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:21:01.892778 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:21:01.892784 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:21:01.892790 kernel: pcpu-alloc: [0] 0 1 May 17 00:21:01.892796 kernel: kvm-guest: PV spinlocks enabled May 17 00:21:01.892803 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:21:01.892810 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:21:01.892819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:21:01.892825 kernel: random: crng init done May 17 00:21:01.892831 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:21:01.892837 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:21:01.892843 kernel: Fallback order for Node 0: 0 May 17 00:21:01.892849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 17 00:21:01.892856 kernel: Policy zone: Normal May 17 00:21:01.892862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:21:01.892870 kernel: software IO TLB: area num 2. May 17 00:21:01.892876 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved) May 17 00:21:01.892882 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:21:01.892889 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:21:01.892895 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:21:01.892901 kernel: Dynamic Preempt: voluntary May 17 00:21:01.892907 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:21:01.893114 kernel: rcu: RCU event tracing is enabled. May 17 00:21:01.893121 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:21:01.893129 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:21:01.893136 kernel: Rude variant of Tasks RCU enabled. May 17 00:21:01.893142 kernel: Tracing variant of Tasks RCU enabled. May 17 00:21:01.893148 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:21:01.893154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:21:01.893160 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:21:01.893166 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:21:01.893172 kernel: Console: colour VGA+ 80x25 May 17 00:21:01.893179 kernel: printk: console [tty0] enabled May 17 00:21:01.893187 kernel: printk: console [ttyS0] enabled May 17 00:21:01.893193 kernel: ACPI: Core revision 20230628 May 17 00:21:01.893199 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:21:01.893206 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:21:01.893219 kernel: x2apic enabled May 17 00:21:01.893228 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:21:01.893234 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 17 00:21:01.893241 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 17 00:21:01.893247 kernel: kvm-guest: setup PV IPIs May 17 00:21:01.893254 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:21:01.893260 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:21:01.893267 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
May 17 00:21:01.893275 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:21:01.893282 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:21:01.893288 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:21:01.893295 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:21:01.893301 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:21:01.893310 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:21:01.893316 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 17 00:21:01.893323 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:21:01.893330 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:21:01.893336 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 17 00:21:01.893343 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 17 00:21:01.893350 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 17 00:21:01.893356 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:21:01.893365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:21:01.893372 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:21:01.893378 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:21:01.893384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:21:01.893391 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 17 00:21:01.893397 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 17 00:21:01.893404 kernel: Freeing SMP alternatives memory: 32K May 17 00:21:01.893410 kernel: pid_max: default: 32768 minimum: 301 May 17 00:21:01.893417 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:21:01.893426 kernel: landlock: Up and running. May 17 00:21:01.893432 kernel: SELinux: Initializing. May 17 00:21:01.893438 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:21:01.893445 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:21:01.893452 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 17 00:21:01.893458 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:21:01.893465 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:21:01.893471 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:21:01.893478 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:21:01.893486 kernel: ... version: 0 May 17 00:21:01.893493 kernel: ... bit width: 48 May 17 00:21:01.893499 kernel: ... generic registers: 6 May 17 00:21:01.893506 kernel: ... value mask: 0000ffffffffffff May 17 00:21:01.893512 kernel: ... max period: 00007fffffffffff May 17 00:21:01.893519 kernel: ... fixed-purpose events: 0 May 17 00:21:01.893525 kernel: ... event mask: 000000000000003f
May 17 00:21:01.893531 kernel: signal: max sigframe size: 3376 May 17 00:21:01.893538 kernel: rcu: Hierarchical SRCU implementation. May 17 00:21:01.893546 kernel: rcu: Max phase no-delay instances is 400. May 17 00:21:01.893553 kernel: smp: Bringing up secondary CPUs ... May 17 00:21:01.893559 kernel: smpboot: x86: Booting SMP configuration: May 17 00:21:01.893566 kernel: .... node #0, CPUs: #1 May 17 00:21:01.893572 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:21:01.893578 kernel: smpboot: Max logical packages: 1 May 17 00:21:01.893585 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) May 17 00:21:01.893591 kernel: devtmpfs: initialized May 17 00:21:01.893598 kernel: x86/mm: Memory block size: 128MB May 17 00:21:01.893606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:21:01.893613 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:21:01.893619 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:21:01.893626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:21:01.893632 kernel: audit: initializing netlink subsys (disabled) May 17 00:21:01.893639 kernel: audit: type=2000 audit(1747441260.987:1): state=initialized audit_enabled=0 res=1 May 17 00:21:01.893645 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:21:01.893652 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:21:01.893658 kernel: cpuidle: using governor menu May 17 00:21:01.894125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:21:01.894133 kernel: dca service started, version 1.12.1 May 17 00:21:01.894140 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:21:01.894147 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 17 00:21:01.894153 kernel: PCI: Using configuration type 1 for base access May 17 00:21:01.894160 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:21:01.894166 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:21:01.894173 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:21:01.894179 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:21:01.894188 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:21:01.894195 kernel: ACPI: Added _OSI(Module Device) May 17 00:21:01.894202 kernel: ACPI: Added _OSI(Processor Device) May 17 00:21:01.894208 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:21:01.894214 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:21:01.894221 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:21:01.894227 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:21:01.894234 kernel: ACPI: Interpreter enabled May 17 00:21:01.894240 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:21:01.894249 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:21:01.894255 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:21:01.894262 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:21:01.894268 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:21:01.894275 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:21:01.894444 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:21:01.894565 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:21:01.894751 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:21:01.894916 kernel: PCI host bridge to bus 0000:00 May 17 00:21:01.895067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:21:01.895193 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:21:01.895296 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:21:01.895395 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 17 00:21:01.895495 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:21:01.895595 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 17 00:21:01.895729 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:21:01.895868 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:21:01.895990 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:21:01.896102 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:21:01.896211 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:21:01.896319 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:21:01.896449 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:21:01.896573 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 May 17 00:21:01.896709 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] May 17 00:21:01.896826 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:21:01.897087 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:21:01.897207 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 00:21:01.897319 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 17 00:21:01.897436 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:21:01.897546 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:21:01.897657 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:21:01.897801 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:21:01.898052 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:21:01.898169 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:21:01.898284 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] May 17 00:21:01.898393 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] May 17 00:21:01.898511 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:21:01.898621 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:21:01.898631 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:21:01.898637 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:21:01.898644 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:21:01.898650 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:21:01.898683 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:21:01.898691 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:21:01.898697 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:21:01.898704 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:21:01.898710 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:21:01.898717 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:21:01.898723 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:21:01.898730 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:21:01.898736 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:21:01.898745 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:21:01.898752 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:21:01.898758 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:21:01.898765 kernel: iommu: Default domain type: Translated May 17 00:21:01.898771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:21:01.898778 kernel: PCI: Using ACPI for IRQ routing May 17 00:21:01.898784 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:21:01.898790 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 17 00:21:01.898797 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 17 00:21:01.898918 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:21:01.899030 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:21:01.899139 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:21:01.899148 kernel: vgaarb: loaded May 17 00:21:01.899155 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:21:01.899161 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:21:01.899168 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:21:01.899174 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:21:01.899184 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:21:01.899191 kernel: pnp: PnP ACPI init May 17 00:21:01.899317 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:21:01.899327 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:21:01.899334 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:21:01.899340 kernel: NET: Registered PF_INET protocol family May 17 00:21:01.899347 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:21:01.899354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:21:01.899363 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:21:01.899370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:21:01.899376 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:21:01.899383 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:21:01.899389 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:21:01.899396 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:21:01.899403 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:21:01.899409 kernel: NET: Registered PF_XDP protocol family May 17 00:21:01.899513 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:21:01.899618 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:21:01.900634 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:21:01.900761 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 17 00:21:01.900864 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:21:01.900964 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 17 00:21:01.900974 kernel: PCI: CLS 0 bytes, default 64 May 17 00:21:01.900981 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:21:01.900987 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 17 00:21:01.900994 kernel: Initialise system trusted keyrings May 17 00:21:01.901005 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:21:01.901012 kernel: Key type asymmetric registered May 17 00:21:01.901018 kernel: Asymmetric key parser 'x509' registered May 17 00:21:01.901025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:21:01.901031 kernel: io scheduler mq-deadline registered May 17 00:21:01.901038 kernel: io scheduler kyber registered May 17 00:21:01.901044 kernel: io scheduler bfq registered May 17 00:21:01.901051 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:21:01.901058 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:21:01.901067 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:21:01.901073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:21:01.901080 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:21:01.901087 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:21:01.901093 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:21:01.901100 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:21:01.901107 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:21:01.901222 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:21:01.901332 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:21:01.901436 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:21:01 UTC (1747441261) May 17 00:21:01.901539 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:21:01.901548 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:21:01.901555 kernel: NET: Registered PF_INET6 protocol family May 17 00:21:01.901561 kernel: Segment Routing with IPv6 May 17 00:21:01.901568 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:21:01.901574 kernel: NET: Registered PF_PACKET protocol family May 17 00:21:01.901581 kernel: Key type dns_resolver registered May 17 00:21:01.901591 kernel: IPI shorthand broadcast: enabled May 17 00:21:01.901597 kernel: sched_clock: Marking stable (663002443, 205819149)->(959596999, -90775407) May 17 00:21:01.901604 kernel: registered taskstats version 1 May 17 00:21:01.901610 kernel: Loading compiled-in X.509 certificates May 17 00:21:01.901617 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:21:01.901623 kernel: Key type .fscrypt registered May 17 00:21:01.901630 kernel: Key type fscrypt-provisioning registered May 17 00:21:01.901636 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:21:01.901645 kernel: ima: Allocated hash algorithm: sha1 May 17 00:21:01.901652 kernel: ima: No architecture policies found May 17 00:21:01.901658 kernel: clk: Disabling unused clocks May 17 00:21:01.902713 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:21:01.902721 kernel: Write protecting the kernel read-only data: 36864k May 17 00:21:01.902728 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:21:01.902735 kernel: Run /init as init process May 17 00:21:01.902741 kernel: with arguments: May 17 00:21:01.902748 kernel: /init May 17 00:21:01.902754 kernel: with environment: May 17 00:21:01.902764 kernel: HOME=/ May 17 00:21:01.902771 kernel: TERM=linux May 17 00:21:01.902777 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:21:01.902786 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:21:01.902795 systemd[1]: Detected virtualization kvm. May 17 00:21:01.902802 systemd[1]: Detected architecture x86-64. May 17 00:21:01.902809 systemd[1]: Running in initrd. May 17 00:21:01.902818 systemd[1]: No hostname configured, using default hostname. May 17 00:21:01.902825 systemd[1]: Hostname set to <localhost>. May 17 00:21:01.902832 systemd[1]: Initializing machine ID from random generator. May 17 00:21:01.902839 systemd[1]: Queued start job for default target initrd.target. May 17 00:21:01.902846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:21:01.902865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:21:01.902878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:21:01.902885 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:21:01.902892 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:21:01.902900 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:21:01.902908 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:21:01.902915 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:21:01.902923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:21:01.902932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:21:01.902939 systemd[1]: Reached target paths.target - Path Units. May 17 00:21:01.902946 systemd[1]: Reached target slices.target - Slice Units. May 17 00:21:01.902953 systemd[1]: Reached target swap.target - Swaps. May 17 00:21:01.902960 systemd[1]: Reached target timers.target - Timer Units. May 17 00:21:01.902967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:21:01.902974 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:21:01.902981 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:21:01.902991 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:21:01.902998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:21:01.903005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:21:01.903012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:21:01.903019 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:21:01.903026 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:21:01.903033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:21:01.903040 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:21:01.903047 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:21:01.903056 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:21:01.903063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:21:01.903070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:01.903096 systemd-journald[176]: Collecting audit messages is disabled. May 17 00:21:01.903115 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:21:01.903122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:21:01.903132 systemd-journald[176]: Journal started May 17 00:21:01.903150 systemd-journald[176]: Runtime Journal (/run/log/journal/a1ff447b69d54ed6918b633334dc6c6f) is 8.0M, max 78.3M, 70.3M free. May 17 00:21:01.904830 systemd-modules-load[177]: Inserted module 'overlay' May 17 00:21:01.907799 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:21:01.911709 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:21:01.928805 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:21:01.928788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:21:01.974101 kernel: Bridge firewalling registered May 17 00:21:01.930092 systemd-modules-load[177]: Inserted module 'br_netfilter' May 17 00:21:01.978791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 17 00:21:01.980326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:21:01.981920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:01.983149 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:21:01.987152 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:21:01.989793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:21:01.996795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:21:02.021058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:21:02.023758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:21:02.036793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:21:02.037608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:02.040342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:21:02.042838 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:21:02.057005 dracut-cmdline[213]: dracut-dracut-053 May 17 00:21:02.060776 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:21:02.066950 systemd-resolved[208]: Positive Trust Anchors: May 17 00:21:02.066961 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:21:02.066988 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:21:02.072763 systemd-resolved[208]: Defaulting to hostname 'linux'. May 17 00:21:02.073723 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:21:02.074565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:21:02.129695 kernel: SCSI subsystem initialized May 17 00:21:02.138682 kernel: Loading iSCSI transport class v2.0-870. May 17 00:21:02.148683 kernel: iscsi: registered transport (tcp) May 17 00:21:02.166847 kernel: iscsi: registered transport (qla4xxx) May 17 00:21:02.166874 kernel: QLogic iSCSI HBA Driver May 17 00:21:02.207382 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:21:02.214867 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:21:02.237719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:21:02.237777 kernel: device-mapper: uevent: version 1.0.3 May 17 00:21:02.239160 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:21:02.280685 kernel: raid6: avx2x4 gen() 31880 MB/s May 17 00:21:02.298685 kernel: raid6: avx2x2 gen() 30411 MB/s May 17 00:21:02.317323 kernel: raid6: avx2x1 gen() 22209 MB/s May 17 00:21:02.317341 kernel: raid6: using algorithm avx2x4 gen() 31880 MB/s May 17 00:21:02.336008 kernel: raid6: .... xor() 4645 MB/s, rmw enabled May 17 00:21:02.336024 kernel: raid6: using avx2x2 recovery algorithm May 17 00:21:02.355689 kernel: xor: automatically using best checksumming function avx May 17 00:21:02.481695 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:21:02.493599 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:21:02.499796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:21:02.512267 systemd-udevd[395]: Using default interface naming scheme 'v255'. May 17 00:21:02.516177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:21:02.523975 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:21:02.539077 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation May 17 00:21:02.571011 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:21:02.575817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:21:02.633678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:21:02.641822 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:21:02.654529 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:21:02.656570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:21:02.658260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:21:02.660408 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:21:02.669839 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:21:02.680583 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:21:02.707179 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:21:02.718706 kernel: scsi host0: Virtio SCSI HBA May 17 00:21:02.727572 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:21:02.727905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:21:02.727970 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:02.732636 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:21:02.733345 kernel: AES CTR mode by8 optimization enabled May 17 00:21:02.734716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:21:02.736176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:21:02.807974 kernel: libata version 3.00 loaded. May 17 00:21:02.736772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:02.805186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:02.816807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 17 00:21:02.858677 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:21:02.858886 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:21:02.858901 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:21:02.859042 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:21:02.862700 kernel: scsi host1: ahci May 17 00:21:02.863797 kernel: scsi host2: ahci May 17 00:21:02.869686 kernel: scsi host3: ahci May 17 00:21:02.874175 kernel: scsi host4: ahci May 17 00:21:02.875178 kernel: scsi host5: ahci May 17 00:21:02.877780 kernel: scsi host6: ahci May 17 00:21:02.877943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 May 17 00:21:02.877963 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 May 17 00:21:02.877973 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 May 17 00:21:02.877982 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 May 17 00:21:02.877992 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 May 17 00:21:02.878001 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 May 17 00:21:02.881755 kernel: sd 0:0:0:0: Power-on or device reset occurred May 17 00:21:02.881958 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 17 00:21:02.882103 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:21:02.882249 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 17 00:21:02.882388 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:21:02.885696 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:21:02.885718 kernel: GPT:9289727 != 167739391 May 17 00:21:02.885729 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:21:02.885738 kernel: GPT:9289727 != 167739391 May 17 00:21:02.885747 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:21:02.885756 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:02.885770 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:21:02.936676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:02.942821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:21:02.955590 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:03.194416 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194454 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194465 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194475 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194677 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.196689 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.232459 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (466) May 17 00:21:03.232342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:21:03.237692 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (469) May 17 00:21:03.245745 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:21:03.250332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
May 17 00:21:03.251737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:21:03.256970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:21:03.264826 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:21:03.269697 disk-uuid[569]: Primary Header is updated. May 17 00:21:03.269697 disk-uuid[569]: Secondary Entries is updated. May 17 00:21:03.269697 disk-uuid[569]: Secondary Header is updated. May 17 00:21:03.273684 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:03.279688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:03.285690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:04.287735 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:04.288279 disk-uuid[570]: The operation has completed successfully. May 17 00:21:04.331386 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:21:04.331517 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:21:04.345786 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:21:04.350273 sh[587]: Success May 17 00:21:04.362740 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:21:04.402567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:21:04.415752 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:21:04.416495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:21:04.442923 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:21:04.442949 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:04.444868 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:21:04.446904 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:21:04.449415 kernel: BTRFS info (device dm-0): using free space tree May 17 00:21:04.456697 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:21:04.458055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:21:04.458999 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:21:04.464771 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:21:04.466787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:21:04.481345 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:04.481368 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:04.481379 kernel: BTRFS info (device sda6): using free space tree May 17 00:21:04.485816 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:21:04.485839 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:21:04.497634 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:21:04.499771 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:04.505337 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:21:04.511815 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 17 00:21:04.580120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:21:04.583022 ignition[695]: Ignition 2.19.0 May 17 00:21:04.583035 ignition[695]: Stage: fetch-offline May 17 00:21:04.583073 ignition[695]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:04.583083 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:04.583165 ignition[695]: parsed url from cmdline: "" May 17 00:21:04.583169 ignition[695]: no config URL provided May 17 00:21:04.583173 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:21:04.583181 ignition[695]: no config at "/usr/lib/ignition/user.ign" May 17 00:21:04.583186 ignition[695]: failed to fetch config: resource requires networking May 17 00:21:04.583332 ignition[695]: Ignition finished successfully May 17 00:21:04.588838 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:21:04.589653 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:21:04.610093 systemd-networkd[772]: lo: Link UP May 17 00:21:04.610104 systemd-networkd[772]: lo: Gained carrier May 17 00:21:04.611782 systemd-networkd[772]: Enumeration completed May 17 00:21:04.612178 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:04.612182 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:21:04.613564 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:21:04.614833 systemd[1]: Reached target network.target - Network. May 17 00:21:04.615087 systemd-networkd[772]: eth0: Link UP May 17 00:21:04.615091 systemd-networkd[772]: eth0: Gained carrier May 17 00:21:04.615098 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:04.622801 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 17 00:21:04.633781 ignition[776]: Ignition 2.19.0 May 17 00:21:04.633793 ignition[776]: Stage: fetch May 17 00:21:04.633935 ignition[776]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:04.633946 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:04.634019 ignition[776]: parsed url from cmdline: "" May 17 00:21:04.634023 ignition[776]: no config URL provided May 17 00:21:04.634028 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:21:04.634036 ignition[776]: no config at "/usr/lib/ignition/user.ign" May 17 00:21:04.634053 ignition[776]: PUT http://169.254.169.254/v1/token: attempt #1 May 17 00:21:04.634175 ignition[776]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:21:04.834761 ignition[776]: PUT http://169.254.169.254/v1/token: attempt #2 May 17 00:21:04.834883 ignition[776]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:21:05.056732 systemd-networkd[772]: eth0: DHCPv4 address 172.233.222.141/24, gateway 172.233.222.1 acquired from 23.210.200.20 May 17 00:21:05.235216 ignition[776]: PUT http://169.254.169.254/v1/token: attempt #3 May 17 00:21:05.327626 ignition[776]: PUT result: OK May 17 00:21:05.327700 ignition[776]: GET http://169.254.169.254/v1/user-data: attempt #1 May 17 00:21:05.438068 ignition[776]: GET result: OK May 17 00:21:05.438730 ignition[776]: parsing config with SHA512: a93d5bff1357d1c38e08d95d86d01d68f1f7d7c7d9af623c337711f828bba67d0c2f4916364520497c70d973e8fc9dbc2abaec23bbebbde743105ae6027afeeb May 17 00:21:05.442204 unknown[776]: fetched base config from "system" May 17 00:21:05.442214 unknown[776]: fetched base config from "system" May 17 00:21:05.442472 ignition[776]: fetch: fetch complete May 17 00:21:05.442220 unknown[776]: fetched user config from "akamai" May 17 00:21:05.442477 ignition[776]: fetch: fetch passed May 17 00:21:05.442515 ignition[776]: Ignition finished successfully May 17 00:21:05.445577 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:21:05.450782 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:21:05.464822 ignition[784]: Ignition 2.19.0 May 17 00:21:05.464831 ignition[784]: Stage: kargs May 17 00:21:05.464975 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:05.464985 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:05.465592 ignition[784]: kargs: kargs passed May 17 00:21:05.467144 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:21:05.465628 ignition[784]: Ignition finished successfully May 17 00:21:05.475769 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:21:05.486009 ignition[791]: Ignition 2.19.0 May 17 00:21:05.486021 ignition[791]: Stage: disks May 17 00:21:05.486150 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:05.488462 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:21:05.486160 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:05.489731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:21:05.486804 ignition[791]: disks: disks passed May 17 00:21:05.490729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
May 17 00:21:05.486839 ignition[791]: Ignition finished successfully May 17 00:21:05.513423 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:21:05.514591 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:21:05.515557 systemd[1]: Reached target basic.target - Basic System. May 17 00:21:05.522769 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:21:05.537166 systemd-fsck[799]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:21:05.540151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:21:05.545748 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:21:05.627698 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:21:05.627789 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:21:05.628860 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:21:05.634729 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:21:05.638745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:21:05.639702 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:21:05.639739 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:21:05.639759 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:21:05.646707 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:21:05.654470 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (807) May 17 00:21:05.654486 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:05.654502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:05.654511 kernel: BTRFS info (device sda6): using free space tree May 17 00:21:05.659962 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:21:05.664780 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:21:05.664796 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:21:05.666121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:21:05.700866 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:21:05.705207 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory May 17 00:21:05.709595 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:21:05.714507 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:21:05.796367 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:21:05.800763 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:21:05.804251 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:21:05.808755 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:21:05.811725 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:05.832125 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 17 00:21:05.836748 ignition[920]: INFO : Ignition 2.19.0 May 17 00:21:05.836748 ignition[920]: INFO : Stage: mount May 17 00:21:05.839064 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:21:05.839064 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:05.839064 ignition[920]: INFO : mount: mount passed May 17 00:21:05.839064 ignition[920]: INFO : Ignition finished successfully May 17 00:21:05.842204 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:21:05.846956 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:21:06.246929 systemd-networkd[772]: eth0: Gained IPv6LL May 17 00:21:06.633801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:21:06.645718 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (931) May 17 00:21:06.649958 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:06.649975 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:06.649985 kernel: BTRFS info (device sda6): using free space tree May 17 00:21:06.656677 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:21:06.656742 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:21:06.659486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:21:06.679514 ignition[948]: INFO : Ignition 2.19.0 May 17 00:21:06.679514 ignition[948]: INFO : Stage: files May 17 00:21:06.680861 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:21:06.680861 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:06.680861 ignition[948]: DEBUG : files: compiled without relabeling support, skipping May 17 00:21:06.683084 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:21:06.683084 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:21:06.684734 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:21:06.685773 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:21:06.685773 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:21:06.685226 unknown[948]: wrote ssh authorized keys file for user: core May 17 00:21:06.688000 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 17 00:21:06.688000 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 17 00:21:06.898507 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:21:07.140460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 17 00:21:07.642192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:21:07.981163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.981163 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:21:07.983546 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 
00:21:07.984539 ignition[948]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:21:07.984539 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:21:07.984539 ignition[948]: INFO : files: files passed May 17 00:21:08.015415 ignition[948]: INFO : Ignition finished successfully May 17 00:21:07.987389 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:21:08.015780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:21:08.018862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:21:08.020641 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:21:08.020787 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:21:08.032544 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:21:08.032544 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:21:08.034351 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:21:08.036289 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:21:08.037379 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:21:08.042826 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:21:08.073211 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:21:08.073346 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:21:08.075062 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:21:08.079746 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:21:08.080391 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:21:08.085804 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:21:08.098278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:21:08.104799 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:21:08.115097 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:21:08.115846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:21:08.117134 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:21:08.118864 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:21:08.118980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:21:08.120690 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:21:08.121463 systemd[1]: Stopped target basic.target - Basic System. May 17 00:21:08.122502 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:21:08.123548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
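Every op in the files stage above (the core user's authorized keys, the helm tarball from get.helm.sh, the manifests under /home/core, update.conf, the kubernetes sysext image plus its /etc/extensions link, the prepare-helm.service unit, the coreos-metadata drop-in, and the preset) corresponds to a section of the Ignition spec-3 config that op(1)-op(10) executed. A hypothetical fragment that would produce roughly this op sequence, sketched in Python for illustration; the SSH key and unit bodies are placeholders, and the field names follow the published Ignition spec rather than anything visible in the log.

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# placeholder unit body\n"},
            {"name": "coreos-metadata.service",
             "dropins": [{"name": "00-custom-metadata.conf",
                          "contents": "[Service]\n# placeholder drop-in body\n"}]},
        ]},
    }

    print(json.dumps(config, indent=2))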
May 17 00:21:08.124781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:21:08.125996 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:21:08.127236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:21:08.128483 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:21:08.129715 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:21:08.130886 systemd[1]: Stopped target swap.target - Swaps. May 17 00:21:08.131893 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:21:08.132006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:21:08.133257 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:21:08.134045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:21:08.135234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:21:08.137201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:21:08.137865 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:21:08.137968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:21:08.139517 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:21:08.139627 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:21:08.140410 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:21:08.140533 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:21:08.147075 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:21:08.149843 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:21:08.150461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:21:08.150607 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:21:08.152768 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:21:08.153232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:21:08.162995 ignition[1000]: INFO : Ignition 2.19.0 May 17 00:21:08.162995 ignition[1000]: INFO : Stage: umount May 17 00:21:08.165417 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:21:08.165417 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:08.164958 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:21:08.168767 ignition[1000]: INFO : umount: umount passed May 17 00:21:08.168767 ignition[1000]: INFO : Ignition finished successfully May 17 00:21:08.165058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:21:08.170723 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:21:08.170829 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:21:08.173296 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:21:08.173375 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:21:08.175842 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:21:08.175891 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:21:08.177639 systemd[1]: ignition-fetch.service: Deactivated successfully. 
May 17 00:21:08.177707 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:21:08.178453 systemd[1]: Stopped target network.target - Network. May 17 00:21:08.179526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:21:08.179576 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:21:08.180754 systemd[1]: Stopped target paths.target - Path Units. May 17 00:21:08.181746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:21:08.183140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:21:08.183929 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:21:08.184402 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:21:08.210422 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:21:08.210491 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:21:08.214056 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:21:08.214104 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:21:08.214634 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:21:08.214712 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:21:08.215250 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:21:08.215296 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:21:08.216049 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:21:08.217304 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:21:08.219333 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:21:08.219723 systemd-networkd[772]: eth0: DHCPv6 lease lost May 17 00:21:08.220942 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:21:08.221070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:21:08.222521 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:21:08.222630 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:21:08.224374 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:21:08.224431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:21:08.226408 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:21:08.226473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:21:08.237739 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:21:08.238264 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:21:08.238317 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:21:08.239003 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:21:08.242104 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:21:08.242218 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:21:08.250622 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:21:08.250711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:21:08.252064 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:21:08.252111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 17 00:21:08.254972 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:21:08.255020 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:21:08.256698 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:21:08.257086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:21:08.258085 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:21:08.258181 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:21:08.259697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:21:08.259765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:21:08.261045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:21:08.261082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:21:08.262067 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:21:08.262117 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:21:08.263619 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:21:08.263683 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:21:08.264817 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:21:08.264866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:08.271817 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:21:08.273104 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:21:08.273157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:21:08.274556 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:21:08.274608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:21:08.275729 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:21:08.275779 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:21:08.277408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:21:08.277454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:08.278634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:21:08.278772 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:21:08.280119 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:21:08.290228 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:21:08.296338 systemd[1]: Switching root. 
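"Switching root." is the initrd handing control to the real root filesystem: /sysroot is moved over /, PID 1 chroots into it and re-executes itself there. A rough Python sketch of the classic switch_root mount-move sequence, offered only as a sketch of the technique (systemd's own implementation handles far more edge cases, and this must never be run outside an initramfs):

    import ctypes
    import os

    MS_MOVE = 8192  # from <sys/mount.h>
    libc = ctypes.CDLL(None, use_errno=True)

    os.chdir("/sysroot")                         # the prepared new root
    libc.mount(b".", b"/", None, MS_MOVE, None)  # move it onto /
    os.chroot(".")                               # re-root this process
    os.chdir("/")
    os.execv("/usr/lib/systemd/systemd", ["systemd"])  # become the real init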
May 17 00:21:08.330383 systemd-journald[176]: Journal stopped May
17 00:21:01.891593 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] May 17 00:21:01.891599 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 17 00:21:01.891605 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:21:01.891613 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:21:01.891620 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 17 00:21:01.891626 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:21:01.891632 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:21:01.891638 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:21:01.891644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:21:01.891650 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:21:01.891656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:21:01.892686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:21:01.892700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:21:01.892707 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:21:01.892714 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:21:01.892720 kernel: TSC deadline timer available May 17 00:21:01.892726 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:21:01.892732 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:21:01.892738 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:21:01.892745 kernel: kvm-guest: setup PV sched yield May 17 00:21:01.892751 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:21:01.892759 kernel: Booting paravirtualized kernel on KVM May 17 00:21:01.892765 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:21:01.892772 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:21:01.892778 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:21:01.892784 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:21:01.892790 kernel: pcpu-alloc: [0] 0 1 May 17 00:21:01.892796 kernel: kvm-guest: PV spinlocks enabled May 17 00:21:01.892803 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:21:01.892810 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:21:01.892819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:21:01.892825 kernel: random: crng init done May 17 00:21:01.892831 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:21:01.892837 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:21:01.892843 kernel: Fallback order for Node 0: 0 May 17 00:21:01.892849 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 17 00:21:01.892856 kernel: Policy zone: Normal May 17 00:21:01.892862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:21:01.892870 kernel: software IO TLB: area num 2. May 17 00:21:01.892876 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved) May 17 00:21:01.892882 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:21:01.892889 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:21:01.892895 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:21:01.892901 kernel: Dynamic Preempt: voluntary May 17 00:21:01.892907 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:21:01.893114 kernel: rcu: RCU event tracing is enabled. May 17 00:21:01.893121 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:21:01.893129 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:21:01.893136 kernel: Rude variant of Tasks RCU enabled. May 17 00:21:01.893142 kernel: Tracing variant of Tasks RCU enabled. May 17 00:21:01.893148 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:21:01.893154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:21:01.893160 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:21:01.893166 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:21:01.893172 kernel: Console: colour VGA+ 80x25 May 17 00:21:01.893179 kernel: printk: console [tty0] enabled May 17 00:21:01.893187 kernel: printk: console [ttyS0] enabled May 17 00:21:01.893193 kernel: ACPI: Core revision 20230628 May 17 00:21:01.893199 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:21:01.893206 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:21:01.893219 kernel: x2apic enabled May 17 00:21:01.893228 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:21:01.893234 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 17 00:21:01.893241 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 17 00:21:01.893247 kernel: kvm-guest: setup PV IPIs May 17 00:21:01.893254 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:21:01.893260 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:21:01.893267 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999999) May 17 00:21:01.893275 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:21:01.893282 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:21:01.893288 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:21:01.893295 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:21:01.893301 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:21:01.893310 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:21:01.893316 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 17 00:21:01.893323 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:21:01.893330 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:21:01.893336 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 17 00:21:01.893343 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 17 00:21:01.893350 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 17 00:21:01.893356 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:21:01.893365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:21:01.893372 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:21:01.893378 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:21:01.893384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:21:01.893391 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 17 00:21:01.893397 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 17 00:21:01.893404 kernel: Freeing SMP alternatives memory: 32K May 17 00:21:01.893410 kernel: pid_max: default: 32768 minimum: 301 May 17 00:21:01.893417 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:21:01.893426 kernel: landlock: Up and running. May 17 00:21:01.893432 kernel: SELinux: Initializing. May 17 00:21:01.893438 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:21:01.893445 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:21:01.893452 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 17 00:21:01.893458 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:21:01.893465 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:21:01.893471 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:21:01.893478 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:21:01.893486 kernel: ... version: 0 May 17 00:21:01.893493 kernel: ... bit width: 48 May 17 00:21:01.893499 kernel: ... generic registers: 6 May 17 00:21:01.893506 kernel: ... value mask: 0000ffffffffffff May 17 00:21:01.893512 kernel: ... max period: 00007fffffffffff May 17 00:21:01.893519 kernel: ... fixed-purpose events: 0 May 17 00:21:01.893525 kernel: ... 
event mask: 000000000000003f May 17 00:21:01.893531 kernel: signal: max sigframe size: 3376 May 17 00:21:01.893538 kernel: rcu: Hierarchical SRCU implementation. May 17 00:21:01.893546 kernel: rcu: Max phase no-delay instances is 400. May 17 00:21:01.893553 kernel: smp: Bringing up secondary CPUs ... May 17 00:21:01.893559 kernel: smpboot: x86: Booting SMP configuration: May 17 00:21:01.893566 kernel: .... node #0, CPUs: #1 May 17 00:21:01.893572 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:21:01.893578 kernel: smpboot: Max logical packages: 1 May 17 00:21:01.893585 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) May 17 00:21:01.893591 kernel: devtmpfs: initialized May 17 00:21:01.893598 kernel: x86/mm: Memory block size: 128MB May 17 00:21:01.893606 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:21:01.893613 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:21:01.893619 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:21:01.893626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:21:01.893632 kernel: audit: initializing netlink subsys (disabled) May 17 00:21:01.893639 kernel: audit: type=2000 audit(1747441260.987:1): state=initialized audit_enabled=0 res=1 May 17 00:21:01.893645 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:21:01.893652 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:21:01.893658 kernel: cpuidle: using governor menu May 17 00:21:01.894125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:21:01.894133 kernel: dca service started, version 1.12.1 May 17 00:21:01.894140 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:21:01.894147 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 17 00:21:01.894153 kernel: PCI: Using configuration type 1 for base access May 17 00:21:01.894160 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:21:01.894166 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:21:01.894173 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:21:01.894179 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:21:01.894188 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:21:01.894195 kernel: ACPI: Added _OSI(Module Device) May 17 00:21:01.894202 kernel: ACPI: Added _OSI(Processor Device) May 17 00:21:01.894208 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:21:01.894214 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:21:01.894221 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:21:01.894227 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:21:01.894234 kernel: ACPI: Interpreter enabled May 17 00:21:01.894240 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:21:01.894249 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:21:01.894255 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:21:01.894262 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:21:01.894268 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:21:01.894275 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:21:01.894444 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:21:01.894565 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:21:01.894751 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:21:01.894916 kernel: PCI host bridge to bus 0000:00 May 17 00:21:01.895067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:21:01.895193 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:21:01.895296 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:21:01.895395 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 17 00:21:01.895495 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:21:01.895595 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 17 00:21:01.895729 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:21:01.895868 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:21:01.895990 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:21:01.896102 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:21:01.896211 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:21:01.896319 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:21:01.896449 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:21:01.896573 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 May 17 00:21:01.896709 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] May 17 00:21:01.896826 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:21:01.897087 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:21:01.897207 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 00:21:01.897319 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 17 00:21:01.897436 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 17 
00:21:01.897546 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:21:01.897657 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:21:01.897801 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:21:01.898052 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:21:01.898169 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:21:01.898284 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] May 17 00:21:01.898393 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] May 17 00:21:01.898511 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:21:01.898621 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:21:01.898631 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:21:01.898637 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:21:01.898644 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:21:01.898650 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:21:01.898683 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:21:01.898691 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:21:01.898697 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:21:01.898704 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:21:01.898710 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:21:01.898717 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:21:01.898723 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:21:01.898730 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:21:01.898736 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:21:01.898745 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:21:01.898752 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:21:01.898758 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:21:01.898765 kernel: iommu: Default domain type: Translated May 17 00:21:01.898771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:21:01.898778 kernel: PCI: Using ACPI for IRQ routing May 17 00:21:01.898784 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:21:01.898790 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 17 00:21:01.898797 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 17 00:21:01.898918 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:21:01.899030 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:21:01.899139 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:21:01.899148 kernel: vgaarb: loaded May 17 00:21:01.899155 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:21:01.899161 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:21:01.899168 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:21:01.899174 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:21:01.899184 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:21:01.899191 kernel: pnp: PnP ACPI init May 17 00:21:01.899317 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:21:01.899327 kernel: pnp: PnP ACPI: found 5 devices May 17 00:21:01.899334 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:21:01.899340 kernel: NET: Registered PF_INET protocol family May 17 00:21:01.899347 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:21:01.899354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:21:01.899363 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:21:01.899370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:21:01.899376 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:21:01.899383 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:21:01.899389 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:21:01.899396 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:21:01.899403 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:21:01.899409 kernel: NET: Registered PF_XDP protocol family May 17 00:21:01.899513 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:21:01.899618 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:21:01.900634 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:21:01.900761 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 17 00:21:01.900864 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:21:01.900964 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 17 00:21:01.900974 kernel: PCI: CLS 0 bytes, default 64 May 17 00:21:01.900981 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:21:01.900987 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 17 00:21:01.900994 kernel: Initialise system trusted keyrings May 17 00:21:01.901005 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:21:01.901012 kernel: Key type asymmetric registered May 17 00:21:01.901018 kernel: Asymmetric key parser 'x509' registered May 17 00:21:01.901025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:21:01.901031 kernel: io scheduler mq-deadline registered May 17 00:21:01.901038 kernel: io scheduler kyber registered May 17 00:21:01.901044 kernel: io scheduler bfq registered May 17 00:21:01.901051 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:21:01.901058 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:21:01.901067 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:21:01.901073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:21:01.901080 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:21:01.901087 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:21:01.901093 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:21:01.901100 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:21:01.901107 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:21:01.901222 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:21:01.901332 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:21:01.901436 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:21:01 UTC (1747441261) May 17 00:21:01.901539 kernel: rtc_cmos 00:03: alarms up to one day, 
y3k, 242 bytes nvram, hpet irqs May 17 00:21:01.901548 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:21:01.901555 kernel: NET: Registered PF_INET6 protocol family May 17 00:21:01.901561 kernel: Segment Routing with IPv6 May 17 00:21:01.901568 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:21:01.901574 kernel: NET: Registered PF_PACKET protocol family May 17 00:21:01.901581 kernel: Key type dns_resolver registered May 17 00:21:01.901591 kernel: IPI shorthand broadcast: enabled May 17 00:21:01.901597 kernel: sched_clock: Marking stable (663002443, 205819149)->(959596999, -90775407) May 17 00:21:01.901604 kernel: registered taskstats version 1 May 17 00:21:01.901610 kernel: Loading compiled-in X.509 certificates May 17 00:21:01.901617 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:21:01.901623 kernel: Key type .fscrypt registered May 17 00:21:01.901630 kernel: Key type fscrypt-provisioning registered May 17 00:21:01.901636 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:21:01.901645 kernel: ima: Allocated hash algorithm: sha1 May 17 00:21:01.901652 kernel: ima: No architecture policies found May 17 00:21:01.901658 kernel: clk: Disabling unused clocks May 17 00:21:01.902713 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:21:01.902721 kernel: Write protecting the kernel read-only data: 36864k May 17 00:21:01.902728 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:21:01.902735 kernel: Run /init as init process May 17 00:21:01.902741 kernel: with arguments: May 17 00:21:01.902748 kernel: /init May 17 00:21:01.902754 kernel: with environment: May 17 00:21:01.902764 kernel: HOME=/ May 17 00:21:01.902771 kernel: TERM=linux May 17 00:21:01.902777 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:21:01.902786 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:21:01.902795 systemd[1]: Detected virtualization kvm. May 17 00:21:01.902802 systemd[1]: Detected architecture x86-64. May 17 00:21:01.902809 systemd[1]: Running in initrd. May 17 00:21:01.902818 systemd[1]: No hostname configured, using default hostname. May 17 00:21:01.902825 systemd[1]: Hostname set to . May 17 00:21:01.902832 systemd[1]: Initializing machine ID from random generator. May 17 00:21:01.902839 systemd[1]: Queued start job for default target initrd.target. May 17 00:21:01.902846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:21:01.902865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:21:01.902878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:21:01.902885 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:21:01.902892 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:21:01.902900 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
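One small cross-check on the rtc_cmos entry above: the epoch value in parentheses matches the UTC timestamp it claims to set, which Python confirms directly:

    from datetime import datetime, timezone

    # rtc_cmos: setting system clock to 2025-05-17T00:21:01 UTC (1747441261)
    print(datetime.fromtimestamp(1747441261, tz=timezone.utc).isoformat())
    # -> 2025-05-17T00:21:01+00:00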
May 17 00:21:01.902908 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:21:01.902915 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:21:01.902923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:21:01.902932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:21:01.902939 systemd[1]: Reached target paths.target - Path Units. May 17 00:21:01.902946 systemd[1]: Reached target slices.target - Slice Units. May 17 00:21:01.902953 systemd[1]: Reached target swap.target - Swaps. May 17 00:21:01.902960 systemd[1]: Reached target timers.target - Timer Units. May 17 00:21:01.902967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:21:01.902974 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:21:01.902981 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:21:01.902991 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:21:01.902998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:21:01.903005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:21:01.903012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:21:01.903019 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:21:01.903026 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:21:01.903033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:21:01.903040 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:21:01.903047 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:21:01.903056 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:21:01.903063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:21:01.903070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:01.903096 systemd-journald[176]: Collecting audit messages is disabled. May 17 00:21:01.903115 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:21:01.903122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:21:01.903132 systemd-journald[176]: Journal started May 17 00:21:01.903150 systemd-journald[176]: Runtime Journal (/run/log/journal/a1ff447b69d54ed6918b633334dc6c6f) is 8.0M, max 78.3M, 70.3M free. May 17 00:21:01.904830 systemd-modules-load[177]: Inserted module 'overlay' May 17 00:21:01.907799 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:21:01.911709 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:21:01.928805 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:21:01.928788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:21:01.974101 kernel: Bridge firewalling registered May 17 00:21:01.930092 systemd-modules-load[177]: Inserted module 'br_netfilter' May 17 00:21:01.978791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 17 00:21:01.980326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:21:01.981920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:01.983149 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:21:01.987152 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:21:01.989793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:21:01.996795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:21:02.021058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:21:02.023758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:21:02.036793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:21:02.037608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:02.040342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:21:02.042838 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:21:02.057005 dracut-cmdline[213]: dracut-dracut-053 May 17 00:21:02.060776 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:21:02.066950 systemd-resolved[208]: Positive Trust Anchors: May 17 00:21:02.066961 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:21:02.066988 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:21:02.072763 systemd-resolved[208]: Defaulting to hostname 'linux'. May 17 00:21:02.073723 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:21:02.074565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:21:02.129695 kernel: SCSI subsystem initialized May 17 00:21:02.138682 kernel: Loading iSCSI transport class v2.0-870. May 17 00:21:02.148683 kernel: iscsi: registered transport (tcp) May 17 00:21:02.166847 kernel: iscsi: registered transport (qla4xxx) May 17 00:21:02.166874 kernel: QLogic iSCSI HBA Driver May 17 00:21:02.207382 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:21:02.214867 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:21:02.237719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:21:02.237777 kernel: device-mapper: uevent: version 1.0.3 May 17 00:21:02.239160 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:21:02.280685 kernel: raid6: avx2x4 gen() 31880 MB/s May 17 00:21:02.298685 kernel: raid6: avx2x2 gen() 30411 MB/s May 17 00:21:02.317323 kernel: raid6: avx2x1 gen() 22209 MB/s May 17 00:21:02.317341 kernel: raid6: using algorithm avx2x4 gen() 31880 MB/s May 17 00:21:02.336008 kernel: raid6: .... xor() 4645 MB/s, rmw enabled May 17 00:21:02.336024 kernel: raid6: using avx2x2 recovery algorithm May 17 00:21:02.355689 kernel: xor: automatically using best checksumming function avx May 17 00:21:02.481695 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:21:02.493599 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:21:02.499796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:21:02.512267 systemd-udevd[395]: Using default interface naming scheme 'v255'. May 17 00:21:02.516177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:21:02.523975 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:21:02.539077 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation May 17 00:21:02.571011 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:21:02.575817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:21:02.633678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:21:02.641822 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:21:02.654529 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:21:02.656570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:21:02.658260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:21:02.660408 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:21:02.669839 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:21:02.680583 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:21:02.707179 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:21:02.718706 kernel: scsi host0: Virtio SCSI HBA May 17 00:21:02.727572 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:21:02.727905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:21:02.727970 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:02.732636 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:21:02.733345 kernel: AES CTR mode by8 optimization enabled May 17 00:21:02.734716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:21:02.736176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:21:02.807974 kernel: libata version 3.00 loaded. May 17 00:21:02.736772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:02.805186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:02.816807 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
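The raid6 lines above show the kernel benchmarking each SIMD variant and keeping the fastest (avx2x4 here, at 31880 MB/s). A toy sketch of that pick-the-fastest pattern, with Python stand-ins rather than the kernel's real gen() loops:

```python
import time

def bench(fn, buf, iters=20):
    # Crude MiB/s estimate for one candidate, like the kernel's gen() timing.
    start = time.perf_counter()
    for _ in range(iters):
        fn(buf)
    elapsed = time.perf_counter() - start
    return iters * len(buf) / elapsed / 2**20

def xor_loop(buf):
    # Byte-at-a-time XOR accumulate: the slow baseline.
    acc = 0
    for b in buf:
        acc ^= b
    return acc

def bigint_pass(buf):
    # A fast stand-in pass over the same buffer (not the same computation);
    # here it only serves as a second benchmark candidate.
    return int.from_bytes(buf, "little")

candidates = {"xor_loop": xor_loop, "bigint_pass": bigint_pass}
buf = bytes(1 << 16)
best = max(candidates, key=lambda name: bench(candidates[name], buf))
print("using algorithm", best)  # analogous to "raid6: using algorithm avx2x4"
```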
May 17 00:21:02.858677 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:21:02.858886 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:21:02.858901 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:21:02.859042 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:21:02.862700 kernel: scsi host1: ahci May 17 00:21:02.863797 kernel: scsi host2: ahci May 17 00:21:02.869686 kernel: scsi host3: ahci May 17 00:21:02.874175 kernel: scsi host4: ahci May 17 00:21:02.875178 kernel: scsi host5: ahci May 17 00:21:02.877780 kernel: scsi host6: ahci May 17 00:21:02.877943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 May 17 00:21:02.877963 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 May 17 00:21:02.877973 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 May 17 00:21:02.877982 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 May 17 00:21:02.877992 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 May 17 00:21:02.878001 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 May 17 00:21:02.881755 kernel: sd 0:0:0:0: Power-on or device reset occurred May 17 00:21:02.881958 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 17 00:21:02.882103 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:21:02.882249 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 17 00:21:02.882388 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:21:02.885696 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:21:02.885718 kernel: GPT:9289727 != 167739391 May 17 00:21:02.885729 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:21:02.885738 kernel: GPT:9289727 != 167739391 May 17 00:21:02.885747 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:21:02.885756 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:02.885770 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:21:02.936676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:02.942821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:21:02.955590 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:03.194416 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194454 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194465 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194475 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.194677 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.196689 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:21:03.232459 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (466) May 17 00:21:03.232342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:21:03.237692 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (469) May 17 00:21:03.245745 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:21:03.250332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
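The GPT warnings above are the usual signature of an image written for a smaller disk and then attached to a larger one: the backup header sits at the image's old last sector (LBA 9289727) instead of the real last sector of the 167739392-sector disk. A minimal sketch of that consistency check, using the sizes from the log:

```python
# From "sd 0:0:0:0: [sda] 167739392 512-byte logical blocks":
disk_sectors = 167739392
# Where the image's backup GPT header was actually found:
alt_header_lba = 9289727

# GPT requires the backup header at the very last LBA of the disk.
expected_alt = disk_sectors - 1
if alt_header_lba != expected_alt:
    print(f"GPT:{alt_header_lba} != {expected_alt}: backup header misplaced; "
          "the disk was grown after the image was written")
```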
May 17 00:21:03.251737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:21:03.256970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:21:03.264826 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:21:03.269697 disk-uuid[569]: Primary Header is updated. May 17 00:21:03.269697 disk-uuid[569]: Secondary Entries is updated. May 17 00:21:03.269697 disk-uuid[569]: Secondary Header is updated. May 17 00:21:03.273684 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:03.279688 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:03.285690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:04.287735 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:21:04.288279 disk-uuid[570]: The operation has completed successfully. May 17 00:21:04.331386 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:21:04.331517 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:21:04.345786 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:21:04.350273 sh[587]: Success May 17 00:21:04.362740 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:21:04.402567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:21:04.415752 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:21:04.416495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:21:04.442923 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:21:04.442949 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:04.444868 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:21:04.446904 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:21:04.449415 kernel: BTRFS info (device dm-0): using free space tree May 17 00:21:04.456697 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:21:04.458055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:21:04.458999 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:21:04.464771 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:21:04.466787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:21:04.481345 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:04.481368 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:04.481379 kernel: BTRFS info (device sda6): using free space tree May 17 00:21:04.485816 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:21:04.485839 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:21:04.497634 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:21:04.499771 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:04.505337 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:21:04.511815 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
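verity-setup above builds /dev/mapper/usr so that every read from /usr is checked against a hash tree whose root must equal the verity.usrhash= value on the kernel command line. A conceptual two-level sketch of that idea, not the on-disk dm-verity format:

```python
import hashlib

BLOCK = 4096  # dm-verity hashes fixed-size data blocks

def verity_root(image: bytes) -> str:
    # Level 0: hash every data block; the "root" here: hash the concatenated
    # block hashes. Real dm-verity builds as many tree levels as needed.
    leaves = [hashlib.sha256(image[i:i + BLOCK]).digest()
              for i in range(0, len(image), BLOCK)]
    return hashlib.sha256(b"".join(leaves)).hexdigest()

image = bytearray(BLOCK * 4)          # stand-in for the usr partition
expected = verity_root(bytes(image))  # shipped as verity.usrhash=...

image[100] ^= 0xFF                    # flip one bit anywhere...
assert verity_root(bytes(image)) != expected  # ...and the root no longer matches
```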
May 17 00:21:04.580120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:21:04.583022 ignition[695]: Ignition 2.19.0 May 17 00:21:04.583035 ignition[695]: Stage: fetch-offline May 17 00:21:04.583073 ignition[695]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:04.583083 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:04.583165 ignition[695]: parsed url from cmdline: "" May 17 00:21:04.583169 ignition[695]: no config URL provided May 17 00:21:04.583173 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:21:04.583181 ignition[695]: no config at "/usr/lib/ignition/user.ign" May 17 00:21:04.583186 ignition[695]: failed to fetch config: resource requires networking May 17 00:21:04.583332 ignition[695]: Ignition finished successfully May 17 00:21:04.588838 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:21:04.589653 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:21:04.610093 systemd-networkd[772]: lo: Link UP May 17 00:21:04.610104 systemd-networkd[772]: lo: Gained carrier May 17 00:21:04.611782 systemd-networkd[772]: Enumeration completed May 17 00:21:04.612178 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:04.612182 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:21:04.613564 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:21:04.614833 systemd[1]: Reached target network.target - Network. May 17 00:21:04.615087 systemd-networkd[772]: eth0: Link UP May 17 00:21:04.615091 systemd-networkd[772]: eth0: Gained carrier May 17 00:21:04.615098 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:04.622801 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 17 00:21:04.633781 ignition[776]: Ignition 2.19.0 May 17 00:21:04.633793 ignition[776]: Stage: fetch May 17 00:21:04.633935 ignition[776]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:04.633946 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:04.634019 ignition[776]: parsed url from cmdline: "" May 17 00:21:04.634023 ignition[776]: no config URL provided May 17 00:21:04.634028 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:21:04.634036 ignition[776]: no config at "/usr/lib/ignition/user.ign" May 17 00:21:04.634053 ignition[776]: PUT http://169.254.169.254/v1/token: attempt #1 May 17 00:21:04.634175 ignition[776]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:21:04.834761 ignition[776]: PUT http://169.254.169.254/v1/token: attempt #2 May 17 00:21:04.834883 ignition[776]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:21:05.056732 systemd-networkd[772]: eth0: DHCPv4 address 172.233.222.141/24, gateway 172.233.222.1 acquired from 23.210.200.20 May 17 00:21:05.235216 ignition[776]: PUT http://169.254.169.254/v1/token: attempt #3 May 17 00:21:05.327626 ignition[776]: PUT result: OK May 17 00:21:05.327700 ignition[776]: GET http://169.254.169.254/v1/user-data: attempt #1 May 17 00:21:05.438068 ignition[776]: GET result: OK May 17 00:21:05.438730 ignition[776]: parsing config with SHA512: a93d5bff1357d1c38e08d95d86d01d68f1f7d7c7d9af623c337711f828bba67d0c2f4916364520497c70d973e8fc9dbc2abaec23bbebbde743105ae6027afeeb May 17 00:21:05.442204 unknown[776]: fetched base config from "system" May 17 00:21:05.442214 unknown[776]: fetched base config from "system" May 17 00:21:05.442472 ignition[776]: fetch: fetch complete May 17 00:21:05.442220 unknown[776]: fetched user config from "akamai" May 17 00:21:05.442477 ignition[776]: fetch: fetch passed May 17 00:21:05.442515 ignition[776]: Ignition finished successfully May 17 00:21:05.445577 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:21:05.450782 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:21:05.464822 ignition[784]: Ignition 2.19.0 May 17 00:21:05.464831 ignition[784]: Stage: kargs May 17 00:21:05.464975 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:05.464985 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:05.465592 ignition[784]: kargs: kargs passed May 17 00:21:05.467144 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:21:05.465628 ignition[784]: Ignition finished successfully May 17 00:21:05.475769 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:21:05.486009 ignition[791]: Ignition 2.19.0 May 17 00:21:05.486021 ignition[791]: Stage: disks May 17 00:21:05.486150 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 17 00:21:05.488462 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:21:05.486160 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:05.489731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:21:05.486804 ignition[791]: disks: disks passed May 17 00:21:05.490729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
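The fetch sequence logged above is: PUT to the link-local metadata service for a token, then GET the user-data with it, retrying while the network is still unconfigured. A sketch under assumptions: the URLs come from the log, but the token header name, timeouts, and backoff are guesses, not the real Ignition implementation:

```python
import time
import urllib.error
import urllib.request

BASE = "http://169.254.169.254/v1"  # from the log

def fetch_user_data(attempts=3, delay=0.2):
    for attempt in range(1, attempts + 1):
        try:
            req = urllib.request.Request(BASE + "/token", data=b"", method="PUT")
            token = urllib.request.urlopen(req, timeout=5).read().decode()
            req = urllib.request.Request(BASE + "/user-data",
                                         headers={"Metadata-Token": token})  # assumed header
            return urllib.request.urlopen(req, timeout=5).read()
        except urllib.error.URLError as err:
            # "network is unreachable" until DHCP configures eth0, so retry.
            print(f"PUT/GET attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable")
```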
May 17 00:21:05.486839 ignition[791]: Ignition finished successfully May 17 00:21:05.513423 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:21:05.514591 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:21:05.515557 systemd[1]: Reached target basic.target - Basic System. May 17 00:21:05.522769 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:21:05.537166 systemd-fsck[799]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:21:05.540151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:21:05.545748 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:21:05.627698 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:21:05.627789 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:21:05.628860 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:21:05.634729 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:21:05.638745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:21:05.639702 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:21:05.639739 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:21:05.639759 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:21:05.646707 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:21:05.654470 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (807) May 17 00:21:05.654486 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:05.654502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:05.654511 kernel: BTRFS info (device sda6): using free space tree May 17 00:21:05.659962 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:21:05.664780 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:21:05.664796 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:21:05.666121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:21:05.700866 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:21:05.705207 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory May 17 00:21:05.709595 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:21:05.714507 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:21:05.796367 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:21:05.800763 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:21:05.804251 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:21:05.808755 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:21:05.811725 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:05.832125 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
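The `cut: ... No such file or directory` messages above come from initrd-setup-root probing /etc account files that simply do not exist yet on a first boot. A sketch of that kind of probe, assuming passwd-style colon-separated records; the function and fallback are illustrative, not the service's actual script:

```python
from pathlib import Path

def existing_users(passwd=Path("/sysroot/etc/passwd")):
    # First boot: the file is absent, exactly what the log's cut(1) probe hit.
    if not passwd.exists():
        print(f"cut: {passwd}: No such file or directory")
        return set()
    # Field 1 of passwd(5) records, i.e. `cut -d: -f1`.
    return {line.split(":", 1)[0]
            for line in passwd.read_text().splitlines() if line}

print(existing_users())
```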
May 17 00:21:05.836748 ignition[920]: INFO : Ignition 2.19.0 May 17 00:21:05.836748 ignition[920]: INFO : Stage: mount May 17 00:21:05.839064 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:21:05.839064 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:05.839064 ignition[920]: INFO : mount: mount passed May 17 00:21:05.839064 ignition[920]: INFO : Ignition finished successfully May 17 00:21:05.842204 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:21:05.846956 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:21:06.246929 systemd-networkd[772]: eth0: Gained IPv6LL May 17 00:21:06.633801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:21:06.645718 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (931) May 17 00:21:06.649958 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:21:06.649975 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:21:06.649985 kernel: BTRFS info (device sda6): using free space tree May 17 00:21:06.656677 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:21:06.656742 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:21:06.659486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:21:06.679514 ignition[948]: INFO : Ignition 2.19.0 May 17 00:21:06.679514 ignition[948]: INFO : Stage: files May 17 00:21:06.680861 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:21:06.680861 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:06.680861 ignition[948]: DEBUG : files: compiled without relabeling support, skipping May 17 00:21:06.683084 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:21:06.683084 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:21:06.684734 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:21:06.685773 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:21:06.685773 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:21:06.685226 unknown[948]: wrote ssh authorized keys file for user: core May 17 00:21:06.688000 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 17 00:21:06.688000 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 17 00:21:06.898507 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:21:07.140460 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:21:07.141738 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.150184 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 17 00:21:07.642192 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:21:07.981163 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:21:07.981163 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:21:07.983546 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:21:07.984539 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 17 00:21:07.984539 ignition[948]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 
00:21:07.984539 ignition[948]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:21:07.984539 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:21:07.984539 ignition[948]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:21:07.984539 ignition[948]: INFO : files: files passed May 17 00:21:08.015415 ignition[948]: INFO : Ignition finished successfully May 17 00:21:07.987389 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:21:08.015780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:21:08.018862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:21:08.020641 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:21:08.020787 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:21:08.032544 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:21:08.032544 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:21:08.034351 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:21:08.036289 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:21:08.037379 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:21:08.042826 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:21:08.073211 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:21:08.073346 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:21:08.075062 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:21:08.079746 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:21:08.080391 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:21:08.085804 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:21:08.098278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:21:08.104799 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:21:08.115097 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:21:08.115846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:21:08.117134 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:21:08.118864 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:21:08.118980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:21:08.120690 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:21:08.121463 systemd[1]: Stopped target basic.target - Basic System. May 17 00:21:08.122502 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:21:08.123548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
May 17 00:21:08.124781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:21:08.125996 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:21:08.127236 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:21:08.128483 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:21:08.129715 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:21:08.130886 systemd[1]: Stopped target swap.target - Swaps. May 17 00:21:08.131893 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:21:08.132006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:21:08.133257 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:21:08.134045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:21:08.135234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:21:08.137201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:21:08.137865 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:21:08.137968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:21:08.139517 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:21:08.139627 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:21:08.140410 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:21:08.140533 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:21:08.147075 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:21:08.149843 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:21:08.150461 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:21:08.150607 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:21:08.152768 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:21:08.153232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:21:08.162995 ignition[1000]: INFO : Ignition 2.19.0 May 17 00:21:08.162995 ignition[1000]: INFO : Stage: umount May 17 00:21:08.165417 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:21:08.165417 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:21:08.164958 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:21:08.168767 ignition[1000]: INFO : umount: umount passed May 17 00:21:08.168767 ignition[1000]: INFO : Ignition finished successfully May 17 00:21:08.165058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:21:08.170723 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:21:08.170829 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:21:08.173296 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:21:08.173375 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:21:08.175842 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:21:08.175891 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:21:08.177639 systemd[1]: ignition-fetch.service: Deactivated successfully. 
May 17 00:21:08.177707 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:21:08.178453 systemd[1]: Stopped target network.target - Network. May 17 00:21:08.179526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:21:08.179576 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:21:08.180754 systemd[1]: Stopped target paths.target - Path Units. May 17 00:21:08.181746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:21:08.183140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:21:08.183929 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:21:08.184402 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:21:08.210422 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:21:08.210491 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:21:08.214056 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:21:08.214104 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:21:08.214634 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:21:08.214712 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:21:08.215250 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:21:08.215296 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:21:08.216049 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:21:08.217304 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:21:08.219333 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:21:08.219723 systemd-networkd[772]: eth0: DHCPv6 lease lost May 17 00:21:08.220942 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:21:08.221070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:21:08.222521 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:21:08.222630 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:21:08.224374 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:21:08.224431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:21:08.226408 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:21:08.226473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:21:08.237739 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:21:08.238264 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:21:08.238317 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:21:08.239003 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:21:08.242104 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:21:08.242218 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:21:08.250622 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:21:08.250711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:21:08.252064 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:21:08.252111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 17 00:21:08.254972 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:21:08.255020 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:21:08.256698 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:21:08.257086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:21:08.258085 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:21:08.258181 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:21:08.259697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:21:08.259765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:21:08.261045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:21:08.261082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:21:08.262067 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:21:08.262117 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:21:08.263619 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:21:08.263683 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:21:08.264817 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:21:08.264866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:21:08.271817 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:21:08.273104 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:21:08.273157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:21:08.274556 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:21:08.274608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:21:08.275729 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:21:08.275779 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:21:08.277408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:21:08.277454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:08.278634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:21:08.278772 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:21:08.280119 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:21:08.290228 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:21:08.296338 systemd[1]: Switching root. May 17 00:21:08.330383 systemd-journald[176]: Journal stopped May 17 00:21:09.269979 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
May 17 00:21:09.270001 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:21:09.270010 kernel: SELinux: policy capability open_perms=1 May 17 00:21:09.270018 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:21:09.270028 kernel: SELinux: policy capability always_check_network=0 May 17 00:21:09.270035 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:21:09.270042 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:21:09.270050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:21:09.270057 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:21:09.270064 kernel: audit: type=1403 audit(1747441268.447:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:21:09.270072 systemd[1]: Successfully loaded SELinux policy in 45.726ms. May 17 00:21:09.270082 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.950ms. May 17 00:21:09.270091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:21:09.270099 systemd[1]: Detected virtualization kvm. May 17 00:21:09.270107 systemd[1]: Detected architecture x86-64. May 17 00:21:09.270115 systemd[1]: Detected first boot. May 17 00:21:09.270125 systemd[1]: Initializing machine ID from random generator. May 17 00:21:09.270132 zram_generator::config[1043]: No configuration found. May 17 00:21:09.270141 systemd[1]: Populated /etc with preset unit settings. May 17 00:21:09.270148 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:21:09.270156 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:21:09.270164 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:21:09.270172 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:21:09.270182 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:21:09.270190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:21:09.270198 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:21:09.270206 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:21:09.270214 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:21:09.270221 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:21:09.270232 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:21:09.270242 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:21:09.270251 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:21:09.270259 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:21:09.270267 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:21:09.270275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 17 00:21:09.270284 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:21:09.270291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:21:09.270299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:21:09.270310 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:21:09.270318 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:21:09.270329 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:21:09.270337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:21:09.270345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:21:09.270353 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:21:09.270361 systemd[1]: Reached target slices.target - Slice Units. May 17 00:21:09.270369 systemd[1]: Reached target swap.target - Swaps. May 17 00:21:09.270380 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:21:09.270388 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:21:09.270396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:21:09.270404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:21:09.270413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:21:09.270424 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:21:09.270432 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:21:09.270440 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:21:09.270448 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:21:09.270456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:09.270465 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:21:09.270473 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:21:09.270481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:21:09.270491 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:21:09.270499 systemd[1]: Reached target machines.target - Containers. May 17 00:21:09.270508 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:21:09.270517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:09.270525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:21:09.270533 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:21:09.270541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:21:09.270549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:21:09.270560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:21:09.270568 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 17 00:21:09.270576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:21:09.270584 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:21:09.270594 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:21:09.270602 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:21:09.270610 kernel: fuse: init (API version 7.39) May 17 00:21:09.270618 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:21:09.270628 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:21:09.270636 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:21:09.270644 kernel: ACPI: bus type drm_connector registered May 17 00:21:09.270652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:21:09.270678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:21:09.270687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:21:09.270696 kernel: loop: module loaded May 17 00:21:09.270704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:21:09.270712 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:21:09.270737 systemd-journald[1126]: Collecting audit messages is disabled. May 17 00:21:09.270754 systemd[1]: Stopped verity-setup.service. May 17 00:21:09.270763 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:09.270771 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:21:09.270782 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:21:09.270790 systemd-journald[1126]: Journal started May 17 00:21:09.270806 systemd-journald[1126]: Runtime Journal (/run/log/journal/e2ae41aecc574ca48d1a76d8eed0d6cb) is 8.0M, max 78.3M, 70.3M free. May 17 00:21:08.949194 systemd[1]: Queued start job for default target multi-user.target. May 17 00:21:08.963276 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:21:08.963828 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:21:09.275714 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:21:09.275986 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:21:09.276635 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:21:09.277337 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:21:09.278018 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:21:09.278894 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:21:09.279856 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:21:09.280809 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:21:09.281020 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:21:09.282167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:21:09.282345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:21:09.283491 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 00:21:09.283735 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:21:09.284637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:21:09.285016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:21:09.286007 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:21:09.286239 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:21:09.287276 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:21:09.287500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:21:09.288449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:21:09.289528 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:21:09.290483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:21:09.307454 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:21:09.315748 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:21:09.346750 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:21:09.347801 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:21:09.347884 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:21:09.349289 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:21:09.353897 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:21:09.355972 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:21:09.356616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:09.373837 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:21:09.377420 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:21:09.377970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:21:09.381258 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:21:09.383872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:21:09.389801 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:21:09.392855 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:21:09.395362 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:21:09.399757 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:21:09.400640 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:21:09.401915 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:21:09.415591 systemd-journald[1126]: Time spent on flushing to /var/log/journal/e2ae41aecc574ca48d1a76d8eed0d6cb is 51.948ms for 976 entries. 
May 17 00:21:09.415591 systemd-journald[1126]: System Journal (/var/log/journal/e2ae41aecc574ca48d1a76d8eed0d6cb) is 8.0M, max 195.6M, 187.6M free. May 17 00:21:09.490724 systemd-journald[1126]: Received client request to flush runtime journal. May 17 00:21:09.491276 kernel: loop0: detected capacity change from 0 to 142488 May 17 00:21:09.491302 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:21:09.407822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:21:09.421831 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:21:09.441286 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:21:09.442724 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:21:09.451383 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:21:09.468001 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:21:09.481881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:21:09.500176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:21:09.502368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:21:09.505481 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:21:09.506086 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. May 17 00:21:09.506102 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. May 17 00:21:09.513678 kernel: loop1: detected capacity change from 0 to 8 May 17 00:21:09.521402 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:21:09.531812 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:21:09.536690 kernel: loop2: detected capacity change from 0 to 229808 May 17 00:21:09.582000 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:21:09.586833 kernel: loop3: detected capacity change from 0 to 140768 May 17 00:21:09.589961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:21:09.614268 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 17 00:21:09.614282 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. May 17 00:21:09.618504 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:21:09.631713 kernel: loop4: detected capacity change from 0 to 142488 May 17 00:21:09.657543 kernel: loop5: detected capacity change from 0 to 8 May 17 00:21:09.664885 kernel: loop6: detected capacity change from 0 to 229808 May 17 00:21:09.683163 kernel: loop7: detected capacity change from 0 to 140768 May 17 00:21:09.695281 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 17 00:21:09.695910 (sd-merge)[1192]: Merged extensions into '/usr'. May 17 00:21:09.702117 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:21:09.702204 systemd[1]: Reloading... May 17 00:21:09.802757 zram_generator::config[1218]: No configuration found. 
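The (sd-merge) lines above are systemd-sysext stacking the four extension images over /usr as a read-only overlay. A sketch that only composes the overlayfs option string; the /run directory layout is an assumption, and the real service also validates each image's extension-release metadata before merging:

```python
# Extension names taken from the sd-merge log line above.
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-akamai"]

hierarchy = "/usr"
# Assumed staging layout; overlayfs resolves lowerdir left-to-right,
# so extensions win over the base /usr, which goes last.
lowerdirs = [f"/run/extensions/{name}/usr" for name in extensions]
opts = "lowerdir=" + ":".join(lowerdirs + [hierarchy])
print(f"mount -t overlay overlay -o {opts},ro {hierarchy}")
```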
May 17 00:21:09.923068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:09.948417 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:21:09.963218 systemd[1]: Reloading finished in 260 ms. May 17 00:21:09.985719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:21:09.991130 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:21:10.004140 systemd[1]: Starting ensure-sysext.service... May 17 00:21:10.007815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:21:10.026702 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... May 17 00:21:10.026716 systemd[1]: Reloading... May 17 00:21:10.061357 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:21:10.061976 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:21:10.062952 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:21:10.063256 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 17 00:21:10.063376 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 17 00:21:10.068835 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:21:10.068908 systemd-tmpfiles[1263]: Skipping /boot May 17 00:21:10.080490 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:21:10.080550 systemd-tmpfiles[1263]: Skipping /boot May 17 00:21:10.117704 zram_generator::config[1289]: No configuration found. May 17 00:21:10.209859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:10.242692 systemd[1]: Reloading finished in 215 ms. May 17 00:21:10.259785 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:21:10.264100 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:21:10.285865 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:21:10.291828 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:21:10.298944 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:21:10.304845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:21:10.308897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:21:10.312895 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:21:10.315290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:10.315441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
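The docker.socket warning above asks for a one-line change in the unit. A sketch of doing that with a drop-in rather than editing the shipped file (the drop-in file name is illustrative); note that socket list settings accumulate, so the path must be reset before being re-set:

    # /etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    # clear the inherited /var/run path, then point at /run directly
    ListenStream=
    ListenStream=/run/docker.sock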
May 17 00:21:10.325605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:21:10.334632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:21:10.337225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:21:10.340324 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:10.340816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:10.341599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:21:10.342585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:21:10.344358 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:21:10.344717 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:21:10.356389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:21:10.365949 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:21:10.367074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:21:10.367346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:21:10.374096 systemd-udevd[1345]: Using default interface naming scheme 'v255'. May 17 00:21:10.374497 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:21:10.377636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:10.378723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:10.383258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:21:10.392026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:21:10.400835 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:21:10.401873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:10.401973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:10.403232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:21:10.409124 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:21:10.409278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:21:10.413727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:21:10.414263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:21:10.418469 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:21:10.419714 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:21:10.420358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:21:10.420781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:21:10.426185 augenrules[1368]: No rules May 17 00:21:10.427602 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:21:10.427797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:21:10.429199 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:21:10.430557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:21:10.431016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:21:10.434254 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:21:10.442424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:21:10.443099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:21:10.447739 systemd[1]: Finished ensure-sysext.service. May 17 00:21:10.452210 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:21:10.453173 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:21:10.455494 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:21:10.458595 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:21:10.458910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:21:10.470932 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:21:10.476575 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:21:10.477316 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:21:10.559562 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:21:10.591582 systemd-resolved[1344]: Positive Trust Anchors: May 17 00:21:10.593639 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:21:10.593692 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:21:10.605741 systemd-resolved[1344]: Defaulting to hostname 'linux'. May 17 00:21:10.613080 systemd-networkd[1392]: lo: Link UP May 17 00:21:10.613088 systemd-networkd[1392]: lo: Gained carrier May 17 00:21:10.614648 systemd-networkd[1392]: Enumeration completed May 17 00:21:10.615392 systemd[1]: Started systemd-networkd.service - Network Configuration. 
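The "Positive Trust Anchors" entry above is systemd-resolved loading its built-in copy of the DNS root's DS record, which it uses for DNSSEC validation. Whether validation is actually enforced is a configuration choice; a sketch using the stock option names:

    # /etc/systemd/resolved.conf
    [Resolve]
    # validate when the upstream supports DNSSEC, fall back otherwise
    DNSSEC=allow-downgrade

    # then inspect the running state per link:
    #   resolvectl status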
May 17 00:21:10.621088 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:10.621161 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:21:10.624761 systemd-networkd[1392]: eth0: Link UP May 17 00:21:10.625919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:21:10.626821 systemd-networkd[1392]: eth0: Gained carrier May 17 00:21:10.627080 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:10.639720 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:21:10.639817 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:21:10.640429 systemd[1]: Reached target network.target - Network. May 17 00:21:10.641139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:21:10.653434 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:21:10.654336 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:21:10.655866 kernel: ACPI: button: Power Button [PWRF] May 17 00:21:10.665774 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:21:10.683386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:21:10.687702 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:21:10.690254 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:21:10.690459 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:21:10.721693 kernel: EDAC MC: Ver: 3.0.0 May 17 00:21:10.734770 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:21:10.739654 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:21:10.753681 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1401) May 17 00:21:10.793384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:21:10.825980 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:21:10.826969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:21:10.835839 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:21:10.838244 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:21:10.850920 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:21:10.863769 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:21:10.881999 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:21:10.883277 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:21:10.883894 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:21:10.884606 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
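The "potentially unpredictable interface name" warnings come from a catch-all fallback network file that matches by name. A sketch of what such a fallback .network file typically contains (the actual Flatcar zz-default.network may differ in its match rules):

    [Match]
    # a wildcard name match is what triggers the warning above
    Name=*

    [Network]
    DHCP=yes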
May 17 00:21:10.885269 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:21:10.886105 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:21:10.886955 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:21:10.887791 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:21:10.888630 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:21:10.888685 systemd[1]: Reached target paths.target - Path Units. May 17 00:21:10.889418 systemd[1]: Reached target timers.target - Timer Units. May 17 00:21:10.891395 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:21:10.893464 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:21:10.902992 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:21:10.905278 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:21:10.906609 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:21:10.907430 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:21:10.908001 systemd[1]: Reached target basic.target - Basic System. May 17 00:21:10.908547 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:21:10.908588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:21:10.911775 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:21:10.915642 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:21:10.921805 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:21:10.927811 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:21:10.930786 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:21:10.942874 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:21:10.943551 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:21:10.949335 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:21:10.955760 jq[1441]: false May 17 00:21:10.952820 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:21:10.956124 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:21:10.959775 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:21:10.973442 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:21:10.979188 dbus-daemon[1440]: [system] SELinux support is enabled May 17 00:21:10.974569 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:21:10.975031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:21:10.980003 systemd[1]: Starting update-engine.service - Update Engine... 
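docker.socket and sshd.socket above are socket activation: systemd owns the listening socket and starts the service on the first connection. The socket and timer units reached at this point can be listed directly, for example:

    systemctl list-sockets    # listening socket -> unit it activates
    systemctl list-timers     # e.g. logrotate.timer, systemd-tmpfiles-clean.timer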
May 17 00:21:10.983780 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:21:10.992584 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:21:11.004683 extend-filesystems[1442]: Found loop4 May 17 00:21:11.004683 extend-filesystems[1442]: Found loop5 May 17 00:21:11.004683 extend-filesystems[1442]: Found loop6 May 17 00:21:11.004683 extend-filesystems[1442]: Found loop7 May 17 00:21:11.004683 extend-filesystems[1442]: Found sda May 17 00:21:11.004683 extend-filesystems[1442]: Found sda1 May 17 00:21:11.004683 extend-filesystems[1442]: Found sda2 May 17 00:21:11.004683 extend-filesystems[1442]: Found sda3 May 17 00:21:11.004683 extend-filesystems[1442]: Found usr May 17 00:21:11.004683 extend-filesystems[1442]: Found sda4 May 17 00:21:11.004683 extend-filesystems[1442]: Found sda6 May 17 00:21:11.004683 extend-filesystems[1442]: Found sda7 May 17 00:21:11.004683 extend-filesystems[1442]: Found sda9 May 17 00:21:11.004683 extend-filesystems[1442]: Checking size of /dev/sda9 May 17 00:21:10.999095 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:21:11.058528 dbus-daemon[1440]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1392 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:21:11.079088 coreos-metadata[1439]: May 17 00:21:11.049 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 17 00:21:11.006821 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:21:11.079466 update_engine[1451]: I20250517 00:21:11.027053 1451 main.cc:92] Flatcar Update Engine starting May 17 00:21:11.079466 update_engine[1451]: I20250517 00:21:11.036327 1451 update_check_scheduler.cc:74] Next update check in 11m56s May 17 00:21:11.007051 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:21:11.020321 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:21:11.079952 jq[1452]: true May 17 00:21:11.020361 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:21:11.027313 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:21:11.027333 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:21:11.036737 systemd[1]: Started update-engine.service - Update Engine. May 17 00:21:11.041173 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:21:11.045128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:21:11.085572 extend-filesystems[1442]: Resized partition /dev/sda9 May 17 00:21:11.045354 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 17 00:21:11.086764 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) May 17 00:21:11.055744 systemd-networkd[1392]: eth0: DHCPv4 address 172.233.222.141/24, gateway 172.233.222.1 acquired from 23.210.200.20 May 17 00:21:11.057710 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. May 17 00:21:11.069423 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:21:11.071145 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:21:11.093759 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 17 00:21:11.093794 jq[1468]: true May 17 00:21:11.093214 systemd-timesyncd[1396]: Contacted time server 85.209.17.10:123 (0.flatcar.pool.ntp.org). May 17 00:21:11.093456 systemd-timesyncd[1396]: Initial clock synchronization to Sat 2025-05-17 00:21:11.371097 UTC. May 17 00:21:11.111064 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:21:11.111447 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:21:11.111778 tar[1456]: linux-amd64/LICENSE May 17 00:21:11.111974 tar[1456]: linux-amd64/helm May 17 00:21:11.229751 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1401) May 17 00:21:11.254178 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:21:11.254212 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:21:11.255369 systemd-logind[1450]: New seat seat0. May 17 00:21:11.259285 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:21:11.268119 bash[1501]: Updated "/home/core/.ssh/authorized_keys" May 17 00:21:11.274808 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:21:11.289851 systemd[1]: Starting sshkeys.service... May 17 00:21:11.344773 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:21:11.348889 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:21:11.352506 dbus-daemon[1440]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1475 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:21:11.353604 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:21:11.354860 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 17 00:21:11.356575 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 17 00:21:11.365155 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:21:11.374595 extend-filesystems[1479]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:21:11.374595 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 10 May 17 00:21:11.374595 extend-filesystems[1479]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 17 00:21:11.373750 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:21:11.390042 extend-filesystems[1442]: Resized filesystem in /dev/sda9 May 17 00:21:11.373952 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
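extend-filesystems above grows the root filesystem from 553472 to 20360187 blocks while it is mounted; ext4 supports this on-line. A manual equivalent, assuming the underlying partition has already been enlarged, is roughly:

    # grow the mounted ext4 filesystem to fill its partition
    resize2fs /dev/sda9
    # confirm the new size
    df -h /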
May 17 00:21:11.419158 polkitd[1510]: Started polkitd version 121 May 17 00:21:11.434401 polkitd[1510]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:21:11.434488 polkitd[1510]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:21:11.439959 polkitd[1510]: Finished loading, compiling and executing 2 rules May 17 00:21:11.440340 dbus-daemon[1440]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:21:11.440477 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:21:11.463087 polkitd[1510]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:21:11.485397 systemd-resolved[1344]: System hostname changed to '172-233-222-141'. May 17 00:21:11.485599 systemd-hostnamed[1475]: Hostname set to <172-233-222-141> (transient) May 17 00:21:11.498989 coreos-metadata[1509]: May 17 00:21:11.498 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 17 00:21:11.503349 containerd[1467]: time="2025-05-17T00:21:11.501820198Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:21:11.564578 containerd[1467]: time="2025-05-17T00:21:11.564490869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:11.569529 containerd[1467]: time="2025-05-17T00:21:11.569499612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:11.569739 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570232632Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570256372Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570410752Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570426562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570490812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570503562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570715942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570733892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570747492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570756982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.570835722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:11.571501 containerd[1467]: time="2025-05-17T00:21:11.571052552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:21:11.572692 containerd[1467]: time="2025-05-17T00:21:11.571173282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:21:11.572692 containerd[1467]: time="2025-05-17T00:21:11.571186082Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:21:11.572692 containerd[1467]: time="2025-05-17T00:21:11.571279623Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:21:11.572692 containerd[1467]: time="2025-05-17T00:21:11.571335743Z" level=info msg="metadata content store policy set" policy=shared May 17 00:21:11.577724 containerd[1467]: time="2025-05-17T00:21:11.577708536Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:21:11.577807 containerd[1467]: time="2025-05-17T00:21:11.577794786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:21:11.577871 containerd[1467]: time="2025-05-17T00:21:11.577860096Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:21:11.577926 containerd[1467]: time="2025-05-17T00:21:11.577905236Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:21:11.577976 containerd[1467]: time="2025-05-17T00:21:11.577965356Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:21:11.578149 containerd[1467]: time="2025-05-17T00:21:11.578134756Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:21:11.578569 containerd[1467]: time="2025-05-17T00:21:11.578522036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:21:11.578797 containerd[1467]: time="2025-05-17T00:21:11.578767106Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:21:11.578822 containerd[1467]: time="2025-05-17T00:21:11.578798036Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:21:11.578822 containerd[1467]: time="2025-05-17T00:21:11.578815716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 17 00:21:11.578863 containerd[1467]: time="2025-05-17T00:21:11.578832546Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578863 containerd[1467]: time="2025-05-17T00:21:11.578850726Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578891 containerd[1467]: time="2025-05-17T00:21:11.578865636Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578891 containerd[1467]: time="2025-05-17T00:21:11.578883966Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578931 containerd[1467]: time="2025-05-17T00:21:11.578902046Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578931 containerd[1467]: time="2025-05-17T00:21:11.578918376Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578959 containerd[1467]: time="2025-05-17T00:21:11.578931756Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578959 containerd[1467]: time="2025-05-17T00:21:11.578945506Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:21:11.578992 containerd[1467]: time="2025-05-17T00:21:11.578969636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:21:11.578992 containerd[1467]: time="2025-05-17T00:21:11.578987336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579027 containerd[1467]: time="2025-05-17T00:21:11.579000296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579027 containerd[1467]: time="2025-05-17T00:21:11.579014836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579063 containerd[1467]: time="2025-05-17T00:21:11.579036206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579118 containerd[1467]: time="2025-05-17T00:21:11.579091936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579142 containerd[1467]: time="2025-05-17T00:21:11.579134746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579169 containerd[1467]: time="2025-05-17T00:21:11.579153126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579187 containerd[1467]: time="2025-05-17T00:21:11.579167786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579203 containerd[1467]: time="2025-05-17T00:21:11.579190936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579219 containerd[1467]: time="2025-05-17T00:21:11.579205147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 17 00:21:11.579240 containerd[1467]: time="2025-05-17T00:21:11.579218797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579240 containerd[1467]: time="2025-05-17T00:21:11.579234317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579269 containerd[1467]: time="2025-05-17T00:21:11.579256297Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:21:11.579302 containerd[1467]: time="2025-05-17T00:21:11.579281547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579319 containerd[1467]: time="2025-05-17T00:21:11.579301527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579319 containerd[1467]: time="2025-05-17T00:21:11.579315997Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:21:11.579391 containerd[1467]: time="2025-05-17T00:21:11.579363817Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:21:11.579414 containerd[1467]: time="2025-05-17T00:21:11.579396927Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:21:11.579414 containerd[1467]: time="2025-05-17T00:21:11.579408587Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:21:11.579455 containerd[1467]: time="2025-05-17T00:21:11.579423327Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:21:11.579455 containerd[1467]: time="2025-05-17T00:21:11.579433817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:21:11.579455 containerd[1467]: time="2025-05-17T00:21:11.579447067Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:21:11.579503 containerd[1467]: time="2025-05-17T00:21:11.579459087Z" level=info msg="NRI interface is disabled by configuration." May 17 00:21:11.579503 containerd[1467]: time="2025-05-17T00:21:11.579471157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:21:11.579811 containerd[1467]: time="2025-05-17T00:21:11.579750547Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:21:11.579939 containerd[1467]: time="2025-05-17T00:21:11.579815127Z" level=info msg="Connect containerd service" May 17 00:21:11.579939 containerd[1467]: time="2025-05-17T00:21:11.579864457Z" level=info msg="using legacy CRI server" May 17 00:21:11.579939 containerd[1467]: time="2025-05-17T00:21:11.579871517Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:21:11.579982 containerd[1467]: time="2025-05-17T00:21:11.579955507Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:21:11.586910 containerd[1467]: time="2025-05-17T00:21:11.585895660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:21:11.586910 
containerd[1467]: time="2025-05-17T00:21:11.586028680Z" level=info msg="Start subscribing containerd event" May 17 00:21:11.586910 containerd[1467]: time="2025-05-17T00:21:11.586073810Z" level=info msg="Start recovering state" May 17 00:21:11.586910 containerd[1467]: time="2025-05-17T00:21:11.586137390Z" level=info msg="Start event monitor" May 17 00:21:11.586910 containerd[1467]: time="2025-05-17T00:21:11.586154690Z" level=info msg="Start snapshots syncer" May 17 00:21:11.586910 containerd[1467]: time="2025-05-17T00:21:11.586164350Z" level=info msg="Start cni network conf syncer for default" May 17 00:21:11.586910 containerd[1467]: time="2025-05-17T00:21:11.586174570Z" level=info msg="Start streaming server" May 17 00:21:11.587034 containerd[1467]: time="2025-05-17T00:21:11.586936210Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:21:11.587034 containerd[1467]: time="2025-05-17T00:21:11.586997980Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:21:11.587186 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:21:11.589934 containerd[1467]: time="2025-05-17T00:21:11.589711452Z" level=info msg="containerd successfully booted in 0.089656s" May 17 00:21:11.602792 coreos-metadata[1509]: May 17 00:21:11.602 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 17 00:21:11.607103 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:21:11.629952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:21:11.638649 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:21:11.646493 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:21:11.646784 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:21:11.656741 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:21:11.663559 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:21:11.672052 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:21:11.674340 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:21:11.675439 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:21:11.735483 coreos-metadata[1509]: May 17 00:21:11.735 INFO Fetch successful May 17 00:21:11.754532 update-ssh-keys[1551]: Updated "/home/core/.ssh/authorized_keys" May 17 00:21:11.756013 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:21:11.760169 systemd[1]: Finished sshkeys.service. May 17 00:21:11.803030 tar[1456]: linux-amd64/README.md May 17 00:21:11.814645 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:21:12.059303 coreos-metadata[1439]: May 17 00:21:12.059 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 17 00:21:12.152613 coreos-metadata[1439]: May 17 00:21:12.152 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 17 00:21:12.374792 coreos-metadata[1439]: May 17 00:21:12.374 INFO Fetch successful May 17 00:21:12.374792 coreos-metadata[1439]: May 17 00:21:12.374 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 17 00:21:12.391380 systemd-networkd[1392]: eth0: Gained IPv6LL May 17 00:21:12.394595 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:21:12.395830 systemd[1]: Reached target network-online.target - Network is Online. 
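containerd's only complaint above is the missing CNI network config; in a kubeadm-style setup the CNI plugin drops one into /etc/cni/net.d later. For illustration only, a minimal bridge conflist of the kind that would satisfy that check (the name and subnet are placeholders):

    {
      "cniVersion": "0.4.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }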
May 17 00:21:12.402855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:12.405734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:21:12.426377 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:21:12.637596 coreos-metadata[1439]: May 17 00:21:12.637 INFO Fetch successful May 17 00:21:12.711750 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:21:12.713431 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:21:13.273036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:13.274346 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:21:13.274398 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:13.277768 systemd[1]: Startup finished in 786ms (kernel) + 6.756s (initrd) + 4.874s (userspace) = 12.417s. May 17 00:21:13.834814 kubelet[1594]: E0517 00:21:13.834729 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:13.839238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:13.839454 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:21:15.477386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:21:15.481892 systemd[1]: Started sshd@0-172.233.222.141:22-139.178.89.65:60328.service - OpenSSH per-connection server daemon (139.178.89.65:60328). May 17 00:21:15.830175 sshd[1606]: Accepted publickey for core from 139.178.89.65 port 60328 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:15.832566 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:15.857001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:21:15.866107 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:21:15.868924 systemd-logind[1450]: New session 1 of user core. May 17 00:21:15.883596 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:21:15.889902 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:21:15.904639 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:21:15.994398 systemd[1610]: Queued start job for default target default.target. May 17 00:21:16.003794 systemd[1610]: Created slice app.slice - User Application Slice. May 17 00:21:16.003819 systemd[1610]: Reached target paths.target - Paths. May 17 00:21:16.003830 systemd[1610]: Reached target timers.target - Timers. May 17 00:21:16.005136 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:21:16.017009 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:21:16.017165 systemd[1610]: Reached target sockets.target - Sockets. May 17 00:21:16.017184 systemd[1610]: Reached target basic.target - Basic System. May 17 00:21:16.017236 systemd[1610]: Reached target default.target - Main User Target. 
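kubelet and the metadata agent above start only after network-online.target is reached, which systemd-networkd-wait-online gates. Any unit that needs a configured network can opt into the same ordering with the standard stanza:

    [Unit]
    # pull in and order after the network-online synchronization point
    Wants=network-online.target
    After=network-online.target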
May 17 00:21:16.017280 systemd[1610]: Startup finished in 105ms. May 17 00:21:16.017422 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:21:16.018844 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:21:16.291905 systemd[1]: Started sshd@1-172.233.222.141:22-139.178.89.65:60334.service - OpenSSH per-connection server daemon (139.178.89.65:60334). May 17 00:21:16.627339 sshd[1621]: Accepted publickey for core from 139.178.89.65 port 60334 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:16.629456 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:16.642439 systemd-logind[1450]: New session 2 of user core. May 17 00:21:16.652823 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:21:16.882952 sshd[1621]: pam_unix(sshd:session): session closed for user core May 17 00:21:16.887581 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. May 17 00:21:16.888844 systemd[1]: sshd@1-172.233.222.141:22-139.178.89.65:60334.service: Deactivated successfully. May 17 00:21:16.890971 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:21:16.891831 systemd-logind[1450]: Removed session 2. May 17 00:21:16.944473 systemd[1]: Started sshd@2-172.233.222.141:22-139.178.89.65:51782.service - OpenSSH per-connection server daemon (139.178.89.65:51782). May 17 00:21:17.294650 sshd[1628]: Accepted publickey for core from 139.178.89.65 port 51782 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:17.296430 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:17.300919 systemd-logind[1450]: New session 3 of user core. May 17 00:21:17.306795 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:21:17.547581 sshd[1628]: pam_unix(sshd:session): session closed for user core May 17 00:21:17.551847 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. May 17 00:21:17.553048 systemd[1]: sshd@2-172.233.222.141:22-139.178.89.65:51782.service: Deactivated successfully. May 17 00:21:17.555112 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:21:17.555942 systemd-logind[1450]: Removed session 3. May 17 00:21:17.609158 systemd[1]: Started sshd@3-172.233.222.141:22-139.178.89.65:51786.service - OpenSSH per-connection server daemon (139.178.89.65:51786). May 17 00:21:17.953431 sshd[1635]: Accepted publickey for core from 139.178.89.65 port 51786 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:17.955181 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:17.959190 systemd-logind[1450]: New session 4 of user core. May 17 00:21:17.965795 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:21:18.211304 sshd[1635]: pam_unix(sshd:session): session closed for user core May 17 00:21:18.214629 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. May 17 00:21:18.215534 systemd[1]: sshd@3-172.233.222.141:22-139.178.89.65:51786.service: Deactivated successfully. May 17 00:21:18.217307 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:21:18.218206 systemd-logind[1450]: Removed session 4. May 17 00:21:18.271840 systemd[1]: Started sshd@4-172.233.222.141:22-139.178.89.65:51792.service - OpenSSH per-connection server daemon (139.178.89.65:51792). 
May 17 00:21:18.613430 sshd[1642]: Accepted publickey for core from 139.178.89.65 port 51792 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:18.615169 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:18.620187 systemd-logind[1450]: New session 5 of user core. May 17 00:21:18.625793 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:21:18.825113 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:21:18.825461 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:18.843049 sudo[1645]: pam_unix(sudo:session): session closed for user root May 17 00:21:18.895474 sshd[1642]: pam_unix(sshd:session): session closed for user core May 17 00:21:18.898720 systemd[1]: sshd@4-172.233.222.141:22-139.178.89.65:51792.service: Deactivated successfully. May 17 00:21:18.900991 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:21:18.902230 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. May 17 00:21:18.903443 systemd-logind[1450]: Removed session 5. May 17 00:21:18.956482 systemd[1]: Started sshd@5-172.233.222.141:22-139.178.89.65:51800.service - OpenSSH per-connection server daemon (139.178.89.65:51800). May 17 00:21:19.294368 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 51800 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:19.295965 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:19.300480 systemd-logind[1450]: New session 6 of user core. May 17 00:21:19.309797 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:21:19.497136 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:21:19.497480 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:19.501065 sudo[1654]: pam_unix(sudo:session): session closed for user root May 17 00:21:19.506986 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:21:19.507316 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:19.525873 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:21:19.527318 auditctl[1657]: No rules May 17 00:21:19.527737 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:21:19.527946 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:21:19.530609 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:21:19.557446 augenrules[1675]: No rules May 17 00:21:19.558334 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:21:19.559472 sudo[1653]: pam_unix(sudo:session): session closed for user root May 17 00:21:19.612363 sshd[1650]: pam_unix(sshd:session): session closed for user core May 17 00:21:19.615990 systemd[1]: sshd@5-172.233.222.141:22-139.178.89.65:51800.service: Deactivated successfully. May 17 00:21:19.618184 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:21:19.618765 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. May 17 00:21:19.619595 systemd-logind[1450]: Removed session 6. 
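The sudo sequence above deletes the rule fragments and restarts audit-rules.service, which then reports "No rules". Audit rules are conventionally kept as fragments under /etc/audit/rules.d and merged at load time; a sketch with an illustrative watch rule (the fragment file name is hypothetical):

    # /etc/audit/rules.d/10-identity.rules
    -w /etc/passwd -p wa -k identity

    # merge the rules.d fragments, load the result, and list what is active
    augenrules --load
    auditctl -l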
May 17 00:21:19.673276 systemd[1]: Started sshd@6-172.233.222.141:22-139.178.89.65:51816.service - OpenSSH per-connection server daemon (139.178.89.65:51816). May 17 00:21:20.006435 sshd[1683]: Accepted publickey for core from 139.178.89.65 port 51816 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:20.008303 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:20.012927 systemd-logind[1450]: New session 7 of user core. May 17 00:21:20.015815 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:21:20.206929 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:21:20.207259 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:20.459103 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:21:20.461874 (dockerd)[1703]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:21:20.704277 dockerd[1703]: time="2025-05-17T00:21:20.704206028Z" level=info msg="Starting up" May 17 00:21:20.791589 dockerd[1703]: time="2025-05-17T00:21:20.791470961Z" level=info msg="Loading containers: start." May 17 00:21:20.895562 kernel: Initializing XFRM netlink socket May 17 00:21:20.971348 systemd-networkd[1392]: docker0: Link UP May 17 00:21:20.988006 dockerd[1703]: time="2025-05-17T00:21:20.987969856Z" level=info msg="Loading containers: done." May 17 00:21:21.004484 dockerd[1703]: time="2025-05-17T00:21:21.004443599Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:21:21.004627 dockerd[1703]: time="2025-05-17T00:21:21.004525476Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:21:21.004653 dockerd[1703]: time="2025-05-17T00:21:21.004625864Z" level=info msg="Daemon has completed initialization" May 17 00:21:21.027320 dockerd[1703]: time="2025-05-17T00:21:21.027279435Z" level=info msg="API listen on /run/docker.sock" May 17 00:21:21.027814 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:21:21.770277 containerd[1467]: time="2025-05-17T00:21:21.770235890Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 17 00:21:22.469030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766237889.mount: Deactivated successfully. 
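dockerd above settles on overlay2 and warns that CONFIG_OVERLAY_FS_REDIRECT_DIR in this kernel disables native diff. Whether that applies on a given host can be checked from the client side, for example:

    docker info --format '{{.Driver}}'             # expect: overlay2
    docker info --format '{{json .DriverStatus}}'  # backing fs, native diff, etc.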
May 17 00:21:23.350401 containerd[1467]: time="2025-05-17T00:21:23.350339655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:23.351340 containerd[1467]: time="2025-05-17T00:21:23.351307517Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 17 00:21:23.351803 containerd[1467]: time="2025-05-17T00:21:23.351745341Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:23.354022 containerd[1467]: time="2025-05-17T00:21:23.353988293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:23.355710 containerd[1467]: time="2025-05-17T00:21:23.355028267Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.584756566s" May 17 00:21:23.355710 containerd[1467]: time="2025-05-17T00:21:23.355060453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 17 00:21:23.355775 containerd[1467]: time="2025-05-17T00:21:23.355712094Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 17 00:21:24.089985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:21:24.098838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:24.270753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:24.275581 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:24.318537 kubelet[1908]: E0517 00:21:24.318269 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:24.325017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:24.325215 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
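The kubelet failure above is the expected pre-bootstrap state on a kubeadm-style node: /var/lib/kubelet/config.yaml is written by kubeadm init/join (presumably what the /home/core/install.sh invoked earlier drives), so each scheduled restart fails with this error until that happens. For reference, the file holds a KubeletConfiguration object; a minimal sketch of its shape, normally generated by kubeadm rather than written by hand:

  cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # matches the CgroupDriver this kubelet later reports in its node config
  cgroupDriver: systemd
  EOF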
May 17 00:21:24.702514 containerd[1467]: time="2025-05-17T00:21:24.702459661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:24.703479 containerd[1467]: time="2025-05-17T00:21:24.703442906Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 17 00:21:24.704337 containerd[1467]: time="2025-05-17T00:21:24.703974008Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:24.706425 containerd[1467]: time="2025-05-17T00:21:24.706390789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:24.707423 containerd[1467]: time="2025-05-17T00:21:24.707387799Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.351651703s" May 17 00:21:24.707458 containerd[1467]: time="2025-05-17T00:21:24.707422903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 17 00:21:24.708460 containerd[1467]: time="2025-05-17T00:21:24.708421112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 17 00:21:25.193326 systemd[1]: Started sshd@7-172.233.222.141:22-68.69.184.230:50082.service - OpenSSH per-connection server daemon (68.69.184.230:50082). 
May 17 00:21:25.794694 containerd[1467]: time="2025-05-17T00:21:25.793077585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:25.794694 containerd[1467]: time="2025-05-17T00:21:25.793874316Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 17 00:21:25.794694 containerd[1467]: time="2025-05-17T00:21:25.793892647Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:25.796678 containerd[1467]: time="2025-05-17T00:21:25.796635566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:25.799005 containerd[1467]: time="2025-05-17T00:21:25.798980944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.090530092s" May 17 00:21:25.799075 containerd[1467]: time="2025-05-17T00:21:25.799056293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 17 00:21:25.800538 containerd[1467]: time="2025-05-17T00:21:25.800513857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:21:26.638338 sshd[1916]: Connection closed by 68.69.184.230 port 50082 May 17 00:21:26.641098 systemd[1]: sshd@7-172.233.222.141:22-68.69.184.230:50082.service: Deactivated successfully. May 17 00:21:26.857271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157419107.mount: Deactivated successfully. 
May 17 00:21:27.249234 containerd[1467]: time="2025-05-17T00:21:27.249173583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:27.250052 containerd[1467]: time="2025-05-17T00:21:27.250009265Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 17 00:21:27.250810 containerd[1467]: time="2025-05-17T00:21:27.250779198Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:27.252411 containerd[1467]: time="2025-05-17T00:21:27.252378243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:27.253087 containerd[1467]: time="2025-05-17T00:21:27.253053806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.452511799s" May 17 00:21:27.253150 containerd[1467]: time="2025-05-17T00:21:27.253135045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 17 00:21:27.254166 containerd[1467]: time="2025-05-17T00:21:27.254143563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 17 00:21:27.938649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676491885.mount: Deactivated successfully. 
May 17 00:21:29.625609 containerd[1467]: time="2025-05-17T00:21:29.625553898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:29.626530 containerd[1467]: time="2025-05-17T00:21:29.626494884Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 17 00:21:29.627314 containerd[1467]: time="2025-05-17T00:21:29.626894233Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:29.629400 containerd[1467]: time="2025-05-17T00:21:29.629365324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:29.630349 containerd[1467]: time="2025-05-17T00:21:29.630315283Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.376145184s" May 17 00:21:29.630399 containerd[1467]: time="2025-05-17T00:21:29.630349716Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 17 00:21:29.631219 containerd[1467]: time="2025-05-17T00:21:29.631182185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:21:30.255648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870954394.mount: Deactivated successfully. 
May 17 00:21:30.264125 containerd[1467]: time="2025-05-17T00:21:30.264042166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:30.264744 containerd[1467]: time="2025-05-17T00:21:30.264696445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:21:30.265244 containerd[1467]: time="2025-05-17T00:21:30.265218671Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:30.266920 containerd[1467]: time="2025-05-17T00:21:30.266891871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:30.267890 containerd[1467]: time="2025-05-17T00:21:30.267655242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 636.443113ms" May 17 00:21:30.267890 containerd[1467]: time="2025-05-17T00:21:30.267699089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:21:30.268350 containerd[1467]: time="2025-05-17T00:21:30.268320406Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 17 00:21:32.069727 containerd[1467]: time="2025-05-17T00:21:32.069611585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:32.070921 containerd[1467]: time="2025-05-17T00:21:32.070842159Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 17 00:21:32.071781 containerd[1467]: time="2025-05-17T00:21:32.071327794Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:32.074221 containerd[1467]: time="2025-05-17T00:21:32.074187436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:32.075340 containerd[1467]: time="2025-05-17T00:21:32.075306803Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.806962176s" May 17 00:21:32.075598 containerd[1467]: time="2025-05-17T00:21:32.075583788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 17 00:21:34.575574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:21:34.583824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
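With the etcd pull above, every image the bootstrap pre-pulls (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) is now in containerd's image store, so the pod sandboxes and containers created later in the log need no further downloads apart from the sandbox image (see the note further down). Two ways to list the store on the node, assuming the stock containerd and crictl binaries:

  # containerd's own CLI; CRI-pulled images live in the k8s.io namespace
  sudo ctr -n k8s.io images ls
  # the same view through the CRI client
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images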
May 17 00:21:34.740837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:34.743911 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:34.745631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:34.746087 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:21:34.746299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:34.752873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:34.780791 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit session-7.scope)... May 17 00:21:34.780907 systemd[1]: Reloading... May 17 00:21:34.907685 zram_generator::config[2079]: No configuration found. May 17 00:21:34.992591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:35.054741 systemd[1]: Reloading finished in 273 ms. May 17 00:21:35.101797 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:21:35.101904 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:21:35.102154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:35.109045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:35.248234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:35.252983 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:35.289774 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:35.289774 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:21:35.289774 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
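The three deprecation warnings are advisory: the flags still work, but two of them have direct KubeletConfiguration equivalents. A hypothetical /var/lib/kubelet/config.yaml fragment covering those two (the endpoint assumes containerd's default socket, and the plugin directory is the Flexvolume path this kubelet recreates just below):

  # KubeletConfiguration (v1beta1) fields replacing the deprecated flags
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file counterpart; per the warning above, from 1.35 the image garbage collector takes the sandbox image from the CRI runtime instead.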
May 17 00:21:35.290071 kubelet[2130]: I0517 00:21:35.289827 2130 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:36.154918 kubelet[2130]: I0517 00:21:36.154888 2130 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:21:36.154918 kubelet[2130]: I0517 00:21:36.154910 2130 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:36.155092 kubelet[2130]: I0517 00:21:36.155071 2130 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:21:36.180554 kubelet[2130]: I0517 00:21:36.180524 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:36.181733 kubelet[2130]: E0517 00:21:36.181691 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.233.222.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:21:36.191108 kubelet[2130]: E0517 00:21:36.191074 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:36.191108 kubelet[2130]: I0517 00:21:36.191102 2130 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:36.194976 kubelet[2130]: I0517 00:21:36.194946 2130 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:21:36.195311 kubelet[2130]: I0517 00:21:36.195281 2130 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:36.195501 kubelet[2130]: I0517 00:21:36.195311 2130 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:36.195579 kubelet[2130]: I0517 00:21:36.195509 2130 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:21:36.195579 kubelet[2130]: I0517 00:21:36.195522 2130 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:21:36.196582 kubelet[2130]: I0517 00:21:36.196557 2130 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:36.199011 kubelet[2130]: I0517 00:21:36.198992 2130 kubelet.go:480] "Attempting to sync node with API server" May 17 00:21:36.199011 kubelet[2130]: I0517 00:21:36.199011 2130 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:36.199092 kubelet[2130]: I0517 00:21:36.199030 2130 kubelet.go:386] "Adding apiserver pod source" May 17 00:21:36.201057 kubelet[2130]: I0517 00:21:36.200846 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:36.207428 kubelet[2130]: I0517 00:21:36.207104 2130 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:36.207515 kubelet[2130]: I0517 00:21:36.207476 2130 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:21:36.208562 kubelet[2130]: W0517 00:21:36.208540 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:21:36.211241 kubelet[2130]: I0517 00:21:36.211038 2130 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:36.211241 kubelet[2130]: I0517 00:21:36.211077 2130 server.go:1289] "Started kubelet" May 17 00:21:36.211241 kubelet[2130]: E0517 00:21:36.211187 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.233.222.141:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-141&limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:21:36.214073 kubelet[2130]: E0517 00:21:36.213976 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.233.222.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:21:36.214757 kubelet[2130]: I0517 00:21:36.214725 2130 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:36.216823 kubelet[2130]: I0517 00:21:36.216115 2130 server.go:317] "Adding debug handlers to kubelet server" May 17 00:21:36.216823 kubelet[2130]: I0517 00:21:36.216104 2130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:36.216823 kubelet[2130]: I0517 00:21:36.216539 2130 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:36.217864 kubelet[2130]: E0517 00:21:36.216700 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.222.141:6443/api/v1/namespaces/default/events\": dial tcp 172.233.222.141:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-222-141.18402899be8217a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-222-141,UID:172-233-222-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-222-141,},FirstTimestamp:2025-05-17 00:21:36.211056549 +0000 UTC m=+0.954480360,LastTimestamp:2025-05-17 00:21:36.211056549 +0000 UTC m=+0.954480360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-222-141,}" May 17 00:21:36.219365 kubelet[2130]: E0517 00:21:36.219337 2130 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:36.219837 kubelet[2130]: I0517 00:21:36.219806 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:36.220694 kubelet[2130]: I0517 00:21:36.219978 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:36.222381 kubelet[2130]: E0517 00:21:36.222354 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:36.222423 kubelet[2130]: I0517 00:21:36.222386 2130 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:36.222521 kubelet[2130]: I0517 00:21:36.222503 2130 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:36.222569 kubelet[2130]: I0517 00:21:36.222555 2130 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:36.223138 kubelet[2130]: E0517 00:21:36.223101 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.233.222.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:21:36.224011 kubelet[2130]: I0517 00:21:36.223992 2130 factory.go:223] Registration of the containerd container factory successfully May 17 00:21:36.224011 kubelet[2130]: I0517 00:21:36.224009 2130 factory.go:223] Registration of the systemd container factory successfully May 17 00:21:36.224090 kubelet[2130]: I0517 00:21:36.224059 2130 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:36.228214 kubelet[2130]: I0517 00:21:36.228182 2130 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:21:36.237392 kubelet[2130]: E0517 00:21:36.237349 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-141?timeout=10s\": dial tcp 172.233.222.141:6443: connect: connection refused" interval="200ms" May 17 00:21:36.248968 kubelet[2130]: I0517 00:21:36.248937 2130 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:21:36.248968 kubelet[2130]: I0517 00:21:36.248960 2130 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:21:36.249038 kubelet[2130]: I0517 00:21:36.248973 2130 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:21:36.249038 kubelet[2130]: I0517 00:21:36.248979 2130 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:21:36.249038 kubelet[2130]: E0517 00:21:36.249012 2130 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:36.252745 kubelet[2130]: E0517 00:21:36.252718 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.233.222.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:21:36.257165 kubelet[2130]: I0517 00:21:36.257150 2130 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:36.257165 kubelet[2130]: I0517 00:21:36.257162 2130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:36.257239 kubelet[2130]: I0517 00:21:36.257176 2130 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:36.258694 kubelet[2130]: I0517 00:21:36.258626 2130 policy_none.go:49] "None policy: Start" May 17 00:21:36.258694 kubelet[2130]: I0517 00:21:36.258642 2130 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:36.258694 kubelet[2130]: I0517 00:21:36.258653 2130 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:36.264171 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:21:36.275873 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:21:36.279489 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:21:36.293695 kubelet[2130]: E0517 00:21:36.293375 2130 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:21:36.293695 kubelet[2130]: I0517 00:21:36.293549 2130 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:36.293695 kubelet[2130]: I0517 00:21:36.293560 2130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:36.294230 kubelet[2130]: I0517 00:21:36.294068 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:36.295956 kubelet[2130]: E0517 00:21:36.295941 2130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:21:36.296108 kubelet[2130]: E0517 00:21:36.296097 2130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-222-141\" not found" May 17 00:21:36.358239 systemd[1]: Created slice kubepods-burstable-pod8dae47af0ef9b7653c150050b2a22560.slice - libcontainer container kubepods-burstable-pod8dae47af0ef9b7653c150050b2a22560.slice. May 17 00:21:36.386010 kubelet[2130]: E0517 00:21:36.385987 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:36.389189 systemd[1]: Created slice kubepods-burstable-pod8f86168e71469d462e247ebcbe84bd42.slice - libcontainer container kubepods-burstable-pod8f86168e71469d462e247ebcbe84bd42.slice. 
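Every "dial tcp 172.233.222.141:6443: connect: connection refused" in this stretch is expected: the kubelet is trying to watch objects, post events, and register the node before it has started the static-pod kube-apiserver it is about to create, so the retries resolve on their own once that container is up. The same endpoint can be probed by hand; on default kubeadm RBAC settings, /healthz is readable anonymously:

  # Refused while the static pod is still being created; prints "ok" once the apiserver serves
  curl -k https://172.233.222.141:6443/healthz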
May 17 00:21:36.391185 kubelet[2130]: E0517 00:21:36.391157 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:36.395409 kubelet[2130]: I0517 00:21:36.395391 2130 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-141" May 17 00:21:36.396259 kubelet[2130]: E0517 00:21:36.395650 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.141:6443/api/v1/nodes\": dial tcp 172.233.222.141:6443: connect: connection refused" node="172-233-222-141" May 17 00:21:36.397329 systemd[1]: Created slice kubepods-burstable-podd64f0221da9b54b48d067217365a9fbc.slice - libcontainer container kubepods-burstable-podd64f0221da9b54b48d067217365a9fbc.slice. May 17 00:21:36.399108 kubelet[2130]: E0517 00:21:36.398957 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:36.424215 kubelet[2130]: I0517 00:21:36.424146 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-ca-certs\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:36.424215 kubelet[2130]: I0517 00:21:36.424170 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:36.424215 kubelet[2130]: I0517 00:21:36.424185 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-kubeconfig\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:36.424215 kubelet[2130]: I0517 00:21:36.424200 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:36.424215 kubelet[2130]: I0517 00:21:36.424212 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dae47af0ef9b7653c150050b2a22560-ca-certs\") pod \"kube-apiserver-172-233-222-141\" (UID: \"8dae47af0ef9b7653c150050b2a22560\") " pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:36.424346 kubelet[2130]: I0517 00:21:36.424226 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dae47af0ef9b7653c150050b2a22560-k8s-certs\") pod \"kube-apiserver-172-233-222-141\" (UID: \"8dae47af0ef9b7653c150050b2a22560\") " pod="kube-system/kube-apiserver-172-233-222-141" May 17 
00:21:36.424346 kubelet[2130]: I0517 00:21:36.424240 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dae47af0ef9b7653c150050b2a22560-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-141\" (UID: \"8dae47af0ef9b7653c150050b2a22560\") " pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:36.424346 kubelet[2130]: I0517 00:21:36.424261 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-k8s-certs\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:36.424346 kubelet[2130]: I0517 00:21:36.424274 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d64f0221da9b54b48d067217365a9fbc-kubeconfig\") pod \"kube-scheduler-172-233-222-141\" (UID: \"d64f0221da9b54b48d067217365a9fbc\") " pod="kube-system/kube-scheduler-172-233-222-141" May 17 00:21:36.437914 kubelet[2130]: E0517 00:21:36.437864 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-141?timeout=10s\": dial tcp 172.233.222.141:6443: connect: connection refused" interval="400ms" May 17 00:21:36.597750 kubelet[2130]: I0517 00:21:36.597723 2130 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-141" May 17 00:21:36.597915 kubelet[2130]: E0517 00:21:36.597894 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.141:6443/api/v1/nodes\": dial tcp 172.233.222.141:6443: connect: connection refused" node="172-233-222-141" May 17 00:21:36.687313 kubelet[2130]: E0517 00:21:36.687198 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:36.688005 containerd[1467]: time="2025-05-17T00:21:36.687964824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-141,Uid:8dae47af0ef9b7653c150050b2a22560,Namespace:kube-system,Attempt:0,}" May 17 00:21:36.692085 kubelet[2130]: E0517 00:21:36.692063 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:36.692700 containerd[1467]: time="2025-05-17T00:21:36.692450643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-141,Uid:8f86168e71469d462e247ebcbe84bd42,Namespace:kube-system,Attempt:0,}" May 17 00:21:36.699787 kubelet[2130]: E0517 00:21:36.699757 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:36.700226 containerd[1467]: time="2025-05-17T00:21:36.700023087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-141,Uid:d64f0221da9b54b48d067217365a9fbc,Namespace:kube-system,Attempt:0,}" May 17 00:21:36.838991 kubelet[2130]: E0517 00:21:36.838929 2130 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-141?timeout=10s\": dial tcp 172.233.222.141:6443: connect: connection refused" interval="800ms" May 17 00:21:36.999356 kubelet[2130]: I0517 00:21:36.999264 2130 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-141" May 17 00:21:36.999651 kubelet[2130]: E0517 00:21:36.999476 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.141:6443/api/v1/nodes\": dial tcp 172.233.222.141:6443: connect: connection refused" node="172-233-222-141" May 17 00:21:37.289339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203764017.mount: Deactivated successfully. May 17 00:21:37.296177 containerd[1467]: time="2025-05-17T00:21:37.294675273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:37.296177 containerd[1467]: time="2025-05-17T00:21:37.295403340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:37.296177 containerd[1467]: time="2025-05-17T00:21:37.296121626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:21:37.296177 containerd[1467]: time="2025-05-17T00:21:37.296150101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:21:37.296469 containerd[1467]: time="2025-05-17T00:21:37.296425200Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:37.297372 containerd[1467]: time="2025-05-17T00:21:37.297284539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:21:37.298853 containerd[1467]: time="2025-05-17T00:21:37.297531574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:37.301185 containerd[1467]: time="2025-05-17T00:21:37.301146520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:37.302262 containerd[1467]: time="2025-05-17T00:21:37.301850818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.806393ms" May 17 00:21:37.302929 containerd[1467]: time="2025-05-17T00:21:37.302864267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.798892ms" May 17 00:21:37.303166 containerd[1467]: time="2025-05-17T00:21:37.303132147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.615873ms" May 17 00:21:37.379716 kubelet[2130]: E0517 00:21:37.379501 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.222.141:6443/api/v1/namespaces/default/events\": dial tcp 172.233.222.141:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-222-141.18402899be8217a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-222-141,UID:172-233-222-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-222-141,},FirstTimestamp:2025-05-17 00:21:36.211056549 +0000 UTC m=+0.954480360,LastTimestamp:2025-05-17 00:21:36.211056549 +0000 UTC m=+0.954480360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-222-141,}" May 17 00:21:37.398169 containerd[1467]: time="2025-05-17T00:21:37.397075202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:37.398169 containerd[1467]: time="2025-05-17T00:21:37.397127256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:37.398169 containerd[1467]: time="2025-05-17T00:21:37.397145449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:37.398169 containerd[1467]: time="2025-05-17T00:21:37.397234488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:37.400821 containerd[1467]: time="2025-05-17T00:21:37.400734683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:37.400871 containerd[1467]: time="2025-05-17T00:21:37.400831292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:37.400907 containerd[1467]: time="2025-05-17T00:21:37.400882465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:37.401218 containerd[1467]: time="2025-05-17T00:21:37.400988015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:37.402425 containerd[1467]: time="2025-05-17T00:21:37.402118229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:37.403164 containerd[1467]: time="2025-05-17T00:21:37.402966334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:37.403211 containerd[1467]: time="2025-05-17T00:21:37.403187786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:37.403580 containerd[1467]: time="2025-05-17T00:21:37.403454605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:37.420843 systemd[1]: Started cri-containerd-41385d71a49a2d5e4faa86cc22c7bfa2780b6ed629964a742f450484851dca1d.scope - libcontainer container 41385d71a49a2d5e4faa86cc22c7bfa2780b6ed629964a742f450484851dca1d. May 17 00:21:37.427305 systemd[1]: Started cri-containerd-8c7f7fd50fb7c44771ffe747a156838280d31f81e6448c1216b0b705a6ed2ae9.scope - libcontainer container 8c7f7fd50fb7c44771ffe747a156838280d31f81e6448c1216b0b705a6ed2ae9. May 17 00:21:37.429204 kubelet[2130]: E0517 00:21:37.429166 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.233.222.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:21:37.443900 systemd[1]: Started cri-containerd-703fb2d7ea96c27717081752002a0a43297f268548c87656d5ec6858ef9dfdc6.scope - libcontainer container 703fb2d7ea96c27717081752002a0a43297f268548c87656d5ec6858ef9dfdc6. May 17 00:21:37.493549 containerd[1467]: time="2025-05-17T00:21:37.493515285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-141,Uid:8f86168e71469d462e247ebcbe84bd42,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c7f7fd50fb7c44771ffe747a156838280d31f81e6448c1216b0b705a6ed2ae9\"" May 17 00:21:37.495425 kubelet[2130]: E0517 00:21:37.495394 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:37.495912 containerd[1467]: time="2025-05-17T00:21:37.495881341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-141,Uid:8dae47af0ef9b7653c150050b2a22560,Namespace:kube-system,Attempt:0,} returns sandbox id \"41385d71a49a2d5e4faa86cc22c7bfa2780b6ed629964a742f450484851dca1d\"" May 17 00:21:37.497232 kubelet[2130]: E0517 00:21:37.496622 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:37.504077 containerd[1467]: time="2025-05-17T00:21:37.503940154Z" level=info msg="CreateContainer within sandbox \"8c7f7fd50fb7c44771ffe747a156838280d31f81e6448c1216b0b705a6ed2ae9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:21:37.505101 containerd[1467]: time="2025-05-17T00:21:37.505069206Z" level=info msg="CreateContainer within sandbox \"41385d71a49a2d5e4faa86cc22c7bfa2780b6ed629964a742f450484851dca1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:21:37.506137 kubelet[2130]: E0517 00:21:37.506039 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.233.222.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:21:37.514319 containerd[1467]: time="2025-05-17T00:21:37.514287950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-141,Uid:d64f0221da9b54b48d067217365a9fbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"703fb2d7ea96c27717081752002a0a43297f268548c87656d5ec6858ef9dfdc6\"" May 17 00:21:37.514824 kubelet[2130]: E0517 00:21:37.514775 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:37.517944 containerd[1467]: time="2025-05-17T00:21:37.517916151Z" level=info msg="CreateContainer within sandbox \"703fb2d7ea96c27717081752002a0a43297f268548c87656d5ec6858ef9dfdc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:21:37.519380 containerd[1467]: time="2025-05-17T00:21:37.519353663Z" level=info msg="CreateContainer within sandbox \"8c7f7fd50fb7c44771ffe747a156838280d31f81e6448c1216b0b705a6ed2ae9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"945ecdf17bb1fce3b04cd0205dc36e68cd9d132d8d6bf13605982b68385fdfc4\"" May 17 00:21:37.520742 containerd[1467]: time="2025-05-17T00:21:37.520009982Z" level=info msg="StartContainer for \"945ecdf17bb1fce3b04cd0205dc36e68cd9d132d8d6bf13605982b68385fdfc4\"" May 17 00:21:37.522977 containerd[1467]: time="2025-05-17T00:21:37.522935909Z" level=info msg="CreateContainer within sandbox \"41385d71a49a2d5e4faa86cc22c7bfa2780b6ed629964a742f450484851dca1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0dfa8372a90a6735a92a3b5b452ec3b509ca7b2496502ca9b38ca8d173ed3554\"" May 17 00:21:37.523620 containerd[1467]: time="2025-05-17T00:21:37.523583256Z" level=info msg="StartContainer for \"0dfa8372a90a6735a92a3b5b452ec3b509ca7b2496502ca9b38ca8d173ed3554\"" May 17 00:21:37.535705 containerd[1467]: time="2025-05-17T00:21:37.535656188Z" level=info msg="CreateContainer within sandbox \"703fb2d7ea96c27717081752002a0a43297f268548c87656d5ec6858ef9dfdc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d88a8ea0592166c07c03327da9930de5a1d7faba3c5e603d59821f7f228552e\"" May 17 00:21:37.536815 containerd[1467]: time="2025-05-17T00:21:37.536791888Z" level=info msg="StartContainer for \"5d88a8ea0592166c07c03327da9930de5a1d7faba3c5e603d59821f7f228552e\"" May 17 00:21:37.559821 systemd[1]: Started cri-containerd-0dfa8372a90a6735a92a3b5b452ec3b509ca7b2496502ca9b38ca8d173ed3554.scope - libcontainer container 0dfa8372a90a6735a92a3b5b452ec3b509ca7b2496502ca9b38ca8d173ed3554. May 17 00:21:37.563829 systemd[1]: Started cri-containerd-945ecdf17bb1fce3b04cd0205dc36e68cd9d132d8d6bf13605982b68385fdfc4.scope - libcontainer container 945ecdf17bb1fce3b04cd0205dc36e68cd9d132d8d6bf13605982b68385fdfc4. May 17 00:21:37.580807 systemd[1]: Started cri-containerd-5d88a8ea0592166c07c03327da9930de5a1d7faba3c5e603d59821f7f228552e.scope - libcontainer container 5d88a8ea0592166c07c03327da9930de5a1d7faba3c5e603d59821f7f228552e. 
May 17 00:21:37.624051 containerd[1467]: time="2025-05-17T00:21:37.623964397Z" level=info msg="StartContainer for \"945ecdf17bb1fce3b04cd0205dc36e68cd9d132d8d6bf13605982b68385fdfc4\" returns successfully" May 17 00:21:37.633186 containerd[1467]: time="2025-05-17T00:21:37.633064044Z" level=info msg="StartContainer for \"0dfa8372a90a6735a92a3b5b452ec3b509ca7b2496502ca9b38ca8d173ed3554\" returns successfully" May 17 00:21:37.639969 kubelet[2130]: E0517 00:21:37.639919 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-141?timeout=10s\": dial tcp 172.233.222.141:6443: connect: connection refused" interval="1.6s" May 17 00:21:37.663223 containerd[1467]: time="2025-05-17T00:21:37.663181406Z" level=info msg="StartContainer for \"5d88a8ea0592166c07c03327da9930de5a1d7faba3c5e603d59821f7f228552e\" returns successfully" May 17 00:21:37.729001 kubelet[2130]: E0517 00:21:37.728968 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.233.222.141:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-141&limit=500&resourceVersion=0\": dial tcp 172.233.222.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:21:37.802084 kubelet[2130]: I0517 00:21:37.801605 2130 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-141" May 17 00:21:38.266185 kubelet[2130]: E0517 00:21:38.266151 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:38.266322 kubelet[2130]: E0517 00:21:38.266298 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:38.276981 kubelet[2130]: E0517 00:21:38.275060 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:38.276981 kubelet[2130]: E0517 00:21:38.275165 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:38.283485 kubelet[2130]: E0517 00:21:38.283463 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:38.283575 kubelet[2130]: E0517 00:21:38.283549 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:39.039614 kubelet[2130]: I0517 00:21:39.039563 2130 kubelet_node_status.go:78] "Successfully registered node" node="172-233-222-141" May 17 00:21:39.040905 kubelet[2130]: E0517 00:21:39.039968 2130 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-233-222-141\": node \"172-233-222-141\" not found" May 17 00:21:39.065860 kubelet[2130]: E0517 00:21:39.065813 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.166533 kubelet[2130]: E0517 00:21:39.166509 2130 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.267127 kubelet[2130]: E0517 00:21:39.267093 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.279621 kubelet[2130]: E0517 00:21:39.279592 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:39.279960 kubelet[2130]: E0517 00:21:39.279755 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-141\" not found" node="172-233-222-141" May 17 00:21:39.279960 kubelet[2130]: E0517 00:21:39.279845 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:39.279960 kubelet[2130]: E0517 00:21:39.279900 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:39.367403 kubelet[2130]: E0517 00:21:39.367313 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.468076 kubelet[2130]: E0517 00:21:39.468044 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.568865 kubelet[2130]: E0517 00:21:39.568841 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.669617 kubelet[2130]: E0517 00:21:39.669559 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.769749 kubelet[2130]: E0517 00:21:39.769703 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.870493 kubelet[2130]: E0517 00:21:39.870464 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:39.971098 kubelet[2130]: E0517 00:21:39.970843 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:40.071766 kubelet[2130]: E0517 00:21:40.071723 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:40.172705 kubelet[2130]: E0517 00:21:40.172640 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:40.273343 kubelet[2130]: E0517 00:21:40.273249 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:40.373555 kubelet[2130]: E0517 00:21:40.373507 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-141\" not found" May 17 00:21:40.430688 kubelet[2130]: I0517 00:21:40.430627 2130 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:40.438052 kubelet[2130]: I0517 00:21:40.437714 2130 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-172-233-222-141" May 17 00:21:40.441103 kubelet[2130]: I0517 00:21:40.440948 2130 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:40.698961 systemd[1]: Reloading requested from client PID 2414 ('systemctl') (unit session-7.scope)... May 17 00:21:40.698979 systemd[1]: Reloading... May 17 00:21:40.782709 zram_generator::config[2454]: No configuration found. May 17 00:21:40.891931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:40.964349 systemd[1]: Reloading finished in 265 ms. May 17 00:21:41.009301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:41.018467 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:21:41.018773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:41.018819 systemd[1]: kubelet.service: Consumed 1.282s CPU time, 129.7M memory peak, 0B memory swap peak. May 17 00:21:41.025951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:41.170872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:41.175918 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:41.221459 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:41.221459 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:21:41.221459 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:21:41.221873 kubelet[2505]: I0517 00:21:41.221439 2505 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:41.229263 kubelet[2505]: I0517 00:21:41.229224 2505 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:21:41.229263 kubelet[2505]: I0517 00:21:41.229251 2505 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:41.229486 kubelet[2505]: I0517 00:21:41.229462 2505 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:21:41.230620 kubelet[2505]: I0517 00:21:41.230595 2505 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:21:41.232917 kubelet[2505]: I0517 00:21:41.232606 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:41.236435 kubelet[2505]: E0517 00:21:41.236394 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:41.236435 kubelet[2505]: I0517 00:21:41.236417 2505 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:41.241681 kubelet[2505]: I0517 00:21:41.239520 2505 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:21:41.241681 kubelet[2505]: I0517 00:21:41.239781 2505 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:41.241681 kubelet[2505]: I0517 00:21:41.239800 2505 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:41.241681 kubelet[2505]: I0517 00:21:41.240047 2505 topology_manager.go:138] "Creating 
topology manager with none policy" May 17 00:21:41.241853 kubelet[2505]: I0517 00:21:41.240056 2505 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:21:41.241853 kubelet[2505]: I0517 00:21:41.240099 2505 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:41.241853 kubelet[2505]: I0517 00:21:41.240269 2505 kubelet.go:480] "Attempting to sync node with API server" May 17 00:21:41.241853 kubelet[2505]: I0517 00:21:41.240284 2505 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:41.241853 kubelet[2505]: I0517 00:21:41.240305 2505 kubelet.go:386] "Adding apiserver pod source" May 17 00:21:41.241853 kubelet[2505]: I0517 00:21:41.240322 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:41.249075 kubelet[2505]: I0517 00:21:41.249053 2505 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:41.249514 kubelet[2505]: I0517 00:21:41.249500 2505 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:21:41.252133 kubelet[2505]: I0517 00:21:41.252093 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:41.252222 kubelet[2505]: I0517 00:21:41.252211 2505 server.go:1289] "Started kubelet" May 17 00:21:41.253770 kubelet[2505]: I0517 00:21:41.253725 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:41.254046 kubelet[2505]: I0517 00:21:41.254017 2505 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:41.258248 kubelet[2505]: I0517 00:21:41.258222 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:41.259625 kubelet[2505]: I0517 00:21:41.259598 2505 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:41.260518 kubelet[2505]: I0517 00:21:41.260504 2505 server.go:317] "Adding debug handlers to kubelet server" May 17 00:21:41.261308 kubelet[2505]: I0517 00:21:41.261292 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:41.264365 kubelet[2505]: E0517 00:21:41.264349 2505 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:41.265572 kubelet[2505]: I0517 00:21:41.265175 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:41.265736 kubelet[2505]: I0517 00:21:41.265721 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:41.265983 kubelet[2505]: I0517 00:21:41.265964 2505 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:41.267160 kubelet[2505]: I0517 00:21:41.267145 2505 factory.go:223] Registration of the systemd container factory successfully May 17 00:21:41.267323 kubelet[2505]: I0517 00:21:41.267305 2505 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:41.269790 kubelet[2505]: I0517 00:21:41.269777 2505 factory.go:223] Registration of the containerd container factory successfully May 17 00:21:41.271868 kubelet[2505]: I0517 00:21:41.271835 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:21:41.272984 kubelet[2505]: I0517 00:21:41.272950 2505 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:21:41.272984 kubelet[2505]: I0517 00:21:41.272972 2505 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:21:41.272984 kubelet[2505]: I0517 00:21:41.272987 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:21:41.273069 kubelet[2505]: I0517 00:21:41.272994 2505 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:21:41.273069 kubelet[2505]: E0517 00:21:41.273036 2505 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:41.319262 kubelet[2505]: I0517 00:21:41.319228 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:41.319262 kubelet[2505]: I0517 00:21:41.319245 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:41.319262 kubelet[2505]: I0517 00:21:41.319262 2505 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:41.319389 kubelet[2505]: I0517 00:21:41.319366 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:21:41.319428 kubelet[2505]: I0517 00:21:41.319387 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:21:41.319428 kubelet[2505]: I0517 00:21:41.319404 2505 policy_none.go:49] "None policy: Start" May 17 00:21:41.319428 kubelet[2505]: I0517 00:21:41.319413 2505 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:41.319428 kubelet[2505]: I0517 00:21:41.319422 2505 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:41.319553 kubelet[2505]: I0517 00:21:41.319536 2505 state_mem.go:75] "Updated machine memory state" May 17 00:21:41.323494 kubelet[2505]: E0517 00:21:41.323472 2505 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:21:41.324419 kubelet[2505]: I0517 00:21:41.323860 2505 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:41.324419 kubelet[2505]: I0517 00:21:41.323874 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:41.324853 
kubelet[2505]: I0517 00:21:41.324839 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:41.325843 kubelet[2505]: E0517 00:21:41.325827 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:21:41.373643 kubelet[2505]: I0517 00:21:41.373627 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.373917 kubelet[2505]: I0517 00:21:41.373878 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-141" May 17 00:21:41.374045 kubelet[2505]: I0517 00:21:41.373702 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:41.379491 kubelet[2505]: E0517 00:21:41.379464 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-141\" already exists" pod="kube-system/kube-scheduler-172-233-222-141" May 17 00:21:41.380580 kubelet[2505]: E0517 00:21:41.380515 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-222-141\" already exists" pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.380580 kubelet[2505]: E0517 00:21:41.380540 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-141\" already exists" pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:41.428747 kubelet[2505]: I0517 00:21:41.428726 2505 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-141" May 17 00:21:41.434078 kubelet[2505]: I0517 00:21:41.434036 2505 kubelet_node_status.go:124] "Node was previously registered" node="172-233-222-141" May 17 00:21:41.434120 kubelet[2505]: I0517 00:21:41.434096 2505 kubelet_node_status.go:78] "Successfully registered node" node="172-233-222-141" May 17 00:21:41.506495 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
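The mirror-pod churn above has a simple shape: kubelet derives each static pod's name from its manifest name plus the node name, and posts a mirror pod to the API server under that same name. The first kubelet (PID 2130) already registered the three control-plane mirror pods, so the restarted kubelet (PID 2505) hits "already exists" when it tries to re-create them. A hypothetical helper showing the naming convention, not kubelet's actual code:

```python
# Hypothetical helper: static pods from /etc/kubernetes/manifests are named
# <manifest name>-<node name>, and the mirror pod reuses that full name.
def static_pod_name(manifest_name: str, node_name: str) -> str:
    return f"{manifest_name}-{node_name}"

NODE = "172-233-222-141"
for manifest in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
    # reproduces the pod names seen in the "already exists" errors above
    print(f"kube-system/{static_pod_name(manifest, NODE)}")
```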
May 17 00:21:41.566305 kubelet[2505]: I0517 00:21:41.566280 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.566390 kubelet[2505]: I0517 00:21:41.566311 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-k8s-certs\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.566390 kubelet[2505]: I0517 00:21:41.566330 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-kubeconfig\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.566390 kubelet[2505]: I0517 00:21:41.566351 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d64f0221da9b54b48d067217365a9fbc-kubeconfig\") pod \"kube-scheduler-172-233-222-141\" (UID: \"d64f0221da9b54b48d067217365a9fbc\") " pod="kube-system/kube-scheduler-172-233-222-141" May 17 00:21:41.566390 kubelet[2505]: I0517 00:21:41.566365 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dae47af0ef9b7653c150050b2a22560-ca-certs\") pod \"kube-apiserver-172-233-222-141\" (UID: \"8dae47af0ef9b7653c150050b2a22560\") " pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:41.566390 kubelet[2505]: I0517 00:21:41.566381 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.566513 kubelet[2505]: I0517 00:21:41.566396 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dae47af0ef9b7653c150050b2a22560-k8s-certs\") pod \"kube-apiserver-172-233-222-141\" (UID: \"8dae47af0ef9b7653c150050b2a22560\") " pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:41.566513 kubelet[2505]: I0517 00:21:41.566411 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dae47af0ef9b7653c150050b2a22560-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-141\" (UID: \"8dae47af0ef9b7653c150050b2a22560\") " pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:41.566513 kubelet[2505]: I0517 00:21:41.566425 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f86168e71469d462e247ebcbe84bd42-ca-certs\") pod 
\"kube-controller-manager-172-233-222-141\" (UID: \"8f86168e71469d462e247ebcbe84bd42\") " pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:41.680995 kubelet[2505]: E0517 00:21:41.680723 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:41.680995 kubelet[2505]: E0517 00:21:41.680781 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:41.680995 kubelet[2505]: E0517 00:21:41.680883 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:42.243002 kubelet[2505]: I0517 00:21:42.242738 2505 apiserver.go:52] "Watching apiserver" May 17 00:21:42.266596 kubelet[2505]: I0517 00:21:42.266558 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:42.279226 kubelet[2505]: I0517 00:21:42.279123 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-222-141" podStartSLOduration=2.2791121739999998 podStartE2EDuration="2.279112174s" podCreationTimestamp="2025-05-17 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:42.27861561 +0000 UTC m=+1.098521609" watchObservedRunningTime="2025-05-17 00:21:42.279112174 +0000 UTC m=+1.099018173" May 17 00:21:42.293469 kubelet[2505]: I0517 00:21:42.293366 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-222-141" podStartSLOduration=2.29335838 podStartE2EDuration="2.29335838s" podCreationTimestamp="2025-05-17 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:42.286885774 +0000 UTC m=+1.106791793" watchObservedRunningTime="2025-05-17 00:21:42.29335838 +0000 UTC m=+1.113264379" May 17 00:21:42.303246 kubelet[2505]: I0517 00:21:42.302854 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:42.303568 kubelet[2505]: I0517 00:21:42.303555 2505 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:42.305258 kubelet[2505]: E0517 00:21:42.305230 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:42.311460 kubelet[2505]: E0517 00:21:42.311446 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-141\" already exists" pod="kube-system/kube-apiserver-172-233-222-141" May 17 00:21:42.311682 kubelet[2505]: E0517 00:21:42.311642 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:42.312346 kubelet[2505]: E0517 00:21:42.312331 2505 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-222-141\" 
already exists" pod="kube-system/kube-controller-manager-172-233-222-141" May 17 00:21:42.312537 kubelet[2505]: E0517 00:21:42.312524 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:42.318573 kubelet[2505]: I0517 00:21:42.318547 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-222-141" podStartSLOduration=2.318538925 podStartE2EDuration="2.318538925s" podCreationTimestamp="2025-05-17 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:42.293861469 +0000 UTC m=+1.113767468" watchObservedRunningTime="2025-05-17 00:21:42.318538925 +0000 UTC m=+1.138444924" May 17 00:21:43.304755 kubelet[2505]: E0517 00:21:43.304451 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:43.304755 kubelet[2505]: E0517 00:21:43.304458 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:43.304755 kubelet[2505]: E0517 00:21:43.304647 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:44.305544 kubelet[2505]: E0517 00:21:44.305504 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:47.198457 kubelet[2505]: E0517 00:21:47.198429 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:47.309570 kubelet[2505]: E0517 00:21:47.309045 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:47.887704 kubelet[2505]: I0517 00:21:47.887655 2505 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:21:47.888011 containerd[1467]: time="2025-05-17T00:21:47.887951176Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:21:47.888309 kubelet[2505]: I0517 00:21:47.888127 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:21:48.329530 systemd[1]: Created slice kubepods-besteffort-pod3c1c0281_d27d_497d_8fdd_82717482a87d.slice - libcontainer container kubepods-besteffort-pod3c1c0281_d27d_497d_8fdd_82717482a87d.slice. 
May 17 00:21:48.414764 kubelet[2505]: I0517 00:21:48.414659 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffw2b\" (UniqueName: \"kubernetes.io/projected/3c1c0281-d27d-497d-8fdd-82717482a87d-kube-api-access-ffw2b\") pod \"kube-proxy-jbpfb\" (UID: \"3c1c0281-d27d-497d-8fdd-82717482a87d\") " pod="kube-system/kube-proxy-jbpfb" May 17 00:21:48.414764 kubelet[2505]: I0517 00:21:48.414752 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c1c0281-d27d-497d-8fdd-82717482a87d-xtables-lock\") pod \"kube-proxy-jbpfb\" (UID: \"3c1c0281-d27d-497d-8fdd-82717482a87d\") " pod="kube-system/kube-proxy-jbpfb" May 17 00:21:48.415138 kubelet[2505]: I0517 00:21:48.414772 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c1c0281-d27d-497d-8fdd-82717482a87d-kube-proxy\") pod \"kube-proxy-jbpfb\" (UID: \"3c1c0281-d27d-497d-8fdd-82717482a87d\") " pod="kube-system/kube-proxy-jbpfb" May 17 00:21:48.415138 kubelet[2505]: I0517 00:21:48.414788 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c1c0281-d27d-497d-8fdd-82717482a87d-lib-modules\") pod \"kube-proxy-jbpfb\" (UID: \"3c1c0281-d27d-497d-8fdd-82717482a87d\") " pod="kube-system/kube-proxy-jbpfb" May 17 00:21:48.518984 kubelet[2505]: E0517 00:21:48.518953 2505 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:21:48.518984 kubelet[2505]: E0517 00:21:48.518976 2505 projected.go:194] Error preparing data for projected volume kube-api-access-ffw2b for pod kube-system/kube-proxy-jbpfb: configmap "kube-root-ca.crt" not found May 17 00:21:48.519130 kubelet[2505]: E0517 00:21:48.519027 2505 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c1c0281-d27d-497d-8fdd-82717482a87d-kube-api-access-ffw2b podName:3c1c0281-d27d-497d-8fdd-82717482a87d nodeName:}" failed. No retries permitted until 2025-05-17 00:21:49.019009344 +0000 UTC m=+7.838915343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ffw2b" (UniqueName: "kubernetes.io/projected/3c1c0281-d27d-497d-8fdd-82717482a87d-kube-api-access-ffw2b") pod "kube-proxy-jbpfb" (UID: "3c1c0281-d27d-497d-8fdd-82717482a87d") : configmap "kube-root-ca.crt" not found May 17 00:21:48.949601 systemd[1]: Created slice kubepods-besteffort-pod0055322b_6a17_4e92_a0e1_acb2e4030392.slice - libcontainer container kubepods-besteffort-pod0055322b_6a17_4e92_a0e1_acb2e4030392.slice. 
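The MountVolume.SetUp failure above is transient: the kube-root-ca.crt configmap is published into each namespace by the controller manager shortly after the control plane comes up, and kubelet retries the projected-volume mount on a backoff that starts at the logged 500ms. The doubling factor and the cap in the sketch below are assumptions about recent kubelet releases, not values read from this log:

```python
# Sketch of an exponential retry schedule starting at the logged 500 ms.
from datetime import timedelta

def backoff_schedule(failures: int,
                     initial: timedelta = timedelta(milliseconds=500),
                     cap: timedelta = timedelta(minutes=2)):
    delay = initial
    for _ in range(failures):
        yield min(delay, cap)
        delay *= 2  # assumed doubling factor

print([d.total_seconds() for d in backoff_schedule(5)])
# [0.5, 1.0, 2.0, 4.0, 8.0]; here the first retry at 00:21:49.019 succeeds.
```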
May 17 00:21:49.017778 kubelet[2505]: I0517 00:21:49.017728 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4z9\" (UniqueName: \"kubernetes.io/projected/0055322b-6a17-4e92-a0e1-acb2e4030392-kube-api-access-vx4z9\") pod \"tigera-operator-844669ff44-nq87f\" (UID: \"0055322b-6a17-4e92-a0e1-acb2e4030392\") " pod="tigera-operator/tigera-operator-844669ff44-nq87f" May 17 00:21:49.017778 kubelet[2505]: I0517 00:21:49.017775 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0055322b-6a17-4e92-a0e1-acb2e4030392-var-lib-calico\") pod \"tigera-operator-844669ff44-nq87f\" (UID: \"0055322b-6a17-4e92-a0e1-acb2e4030392\") " pod="tigera-operator/tigera-operator-844669ff44-nq87f" May 17 00:21:49.238588 kubelet[2505]: E0517 00:21:49.237582 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:49.238973 containerd[1467]: time="2025-05-17T00:21:49.238086658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbpfb,Uid:3c1c0281-d27d-497d-8fdd-82717482a87d,Namespace:kube-system,Attempt:0,}" May 17 00:21:49.253245 containerd[1467]: time="2025-05-17T00:21:49.252965443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-nq87f,Uid:0055322b-6a17-4e92-a0e1-acb2e4030392,Namespace:tigera-operator,Attempt:0,}" May 17 00:21:49.261543 containerd[1467]: time="2025-05-17T00:21:49.260760132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.261543 containerd[1467]: time="2025-05-17T00:21:49.261358388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.261543 containerd[1467]: time="2025-05-17T00:21:49.261372035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.261697 containerd[1467]: time="2025-05-17T00:21:49.261479812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.283027 containerd[1467]: time="2025-05-17T00:21:49.282805606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:49.283027 containerd[1467]: time="2025-05-17T00:21:49.282877805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:49.283027 containerd[1467]: time="2025-05-17T00:21:49.282888520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.283027 containerd[1467]: time="2025-05-17T00:21:49.282965001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:49.289042 systemd[1]: Started cri-containerd-711f4435a514c2a4792fb405143fa1ae6dc3454efdc05f821051cb449512d916.scope - libcontainer container 711f4435a514c2a4792fb405143fa1ae6dc3454efdc05f821051cb449512d916. 
May 17 00:21:49.309811 systemd[1]: Started cri-containerd-51a1ede06909d0f247074c2ef7565116bd96394d5bac9aa18f1ab56d698dcbc8.scope - libcontainer container 51a1ede06909d0f247074c2ef7565116bd96394d5bac9aa18f1ab56d698dcbc8. May 17 00:21:49.328286 containerd[1467]: time="2025-05-17T00:21:49.328237060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jbpfb,Uid:3c1c0281-d27d-497d-8fdd-82717482a87d,Namespace:kube-system,Attempt:0,} returns sandbox id \"711f4435a514c2a4792fb405143fa1ae6dc3454efdc05f821051cb449512d916\"" May 17 00:21:49.329384 kubelet[2505]: E0517 00:21:49.329005 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:49.334823 containerd[1467]: time="2025-05-17T00:21:49.334781730Z" level=info msg="CreateContainer within sandbox \"711f4435a514c2a4792fb405143fa1ae6dc3454efdc05f821051cb449512d916\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:21:49.348735 containerd[1467]: time="2025-05-17T00:21:49.348599525Z" level=info msg="CreateContainer within sandbox \"711f4435a514c2a4792fb405143fa1ae6dc3454efdc05f821051cb449512d916\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62e7437b8177e85f51e0a5972790d4867225ce4420e8a074524005a567bab205\"" May 17 00:21:49.349696 containerd[1467]: time="2025-05-17T00:21:49.349643656Z" level=info msg="StartContainer for \"62e7437b8177e85f51e0a5972790d4867225ce4420e8a074524005a567bab205\"" May 17 00:21:49.355555 containerd[1467]: time="2025-05-17T00:21:49.355500234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-nq87f,Uid:0055322b-6a17-4e92-a0e1-acb2e4030392,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"51a1ede06909d0f247074c2ef7565116bd96394d5bac9aa18f1ab56d698dcbc8\"" May 17 00:21:49.356881 containerd[1467]: time="2025-05-17T00:21:49.356832046Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:21:49.385791 systemd[1]: Started cri-containerd-62e7437b8177e85f51e0a5972790d4867225ce4420e8a074524005a567bab205.scope - libcontainer container 62e7437b8177e85f51e0a5972790d4867225ce4420e8a074524005a567bab205. May 17 00:21:49.415751 containerd[1467]: time="2025-05-17T00:21:49.415718754Z" level=info msg="StartContainer for \"62e7437b8177e85f51e0a5972790d4867225ce4420e8a074524005a567bab205\" returns successfully" May 17 00:21:50.291000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506855215.mount: Deactivated successfully. 
May 17 00:21:50.316435 kubelet[2505]: E0517 00:21:50.316390 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:50.327860 kubelet[2505]: I0517 00:21:50.327709 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jbpfb" podStartSLOduration=2.32762459 podStartE2EDuration="2.32762459s" podCreationTimestamp="2025-05-17 00:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:50.3271237 +0000 UTC m=+9.147029699" watchObservedRunningTime="2025-05-17 00:21:50.32762459 +0000 UTC m=+9.147530589" May 17 00:21:50.715977 containerd[1467]: time="2025-05-17T00:21:50.715882724Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:50.717023 containerd[1467]: time="2025-05-17T00:21:50.716980522Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:21:50.717428 containerd[1467]: time="2025-05-17T00:21:50.717395409Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:50.718954 containerd[1467]: time="2025-05-17T00:21:50.718886804Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:50.720125 containerd[1467]: time="2025-05-17T00:21:50.719694827Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.362832605s" May 17 00:21:50.720125 containerd[1467]: time="2025-05-17T00:21:50.719732515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:21:50.722912 containerd[1467]: time="2025-05-17T00:21:50.722883559Z" level=info msg="CreateContainer within sandbox \"51a1ede06909d0f247074c2ef7565116bd96394d5bac9aa18f1ab56d698dcbc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:21:50.736372 containerd[1467]: time="2025-05-17T00:21:50.736345098Z" level=info msg="CreateContainer within sandbox \"51a1ede06909d0f247074c2ef7565116bd96394d5bac9aa18f1ab56d698dcbc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d4af7d18744cf57c8298090bd86b9f43cc559349f3df38180164982113a9a9a0\"" May 17 00:21:50.737266 containerd[1467]: time="2025-05-17T00:21:50.736938115Z" level=info msg="StartContainer for \"d4af7d18744cf57c8298090bd86b9f43cc559349f3df38180164982113a9a9a0\"" May 17 00:21:50.769780 systemd[1]: Started cri-containerd-d4af7d18744cf57c8298090bd86b9f43cc559349f3df38180164982113a9a9a0.scope - libcontainer container d4af7d18744cf57c8298090bd86b9f43cc559349f3df38180164982113a9a9a0. 
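A back-of-envelope check on the tigera-operator pull recorded above, using only the logged numbers (25,055,451 bytes read in 1.362832605s):

```python
bytes_read = 25_055_451  # "bytes read" from the stop-pulling event
seconds = 1.362832605    # pull duration reported by containerd
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~18.4 MB/s from quay.io
```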
May 17 00:21:50.794449 containerd[1467]: time="2025-05-17T00:21:50.794409313Z" level=info msg="StartContainer for \"d4af7d18744cf57c8298090bd86b9f43cc559349f3df38180164982113a9a9a0\" returns successfully" May 17 00:21:51.320012 kubelet[2505]: E0517 00:21:51.319354 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:51.335822 kubelet[2505]: I0517 00:21:51.335752 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-nq87f" podStartSLOduration=1.971621871 podStartE2EDuration="3.335737757s" podCreationTimestamp="2025-05-17 00:21:48 +0000 UTC" firstStartedPulling="2025-05-17 00:21:49.356501312 +0000 UTC m=+8.176407311" lastFinishedPulling="2025-05-17 00:21:50.720617198 +0000 UTC m=+9.540523197" observedRunningTime="2025-05-17 00:21:51.328209007 +0000 UTC m=+10.148115006" watchObservedRunningTime="2025-05-17 00:21:51.335737757 +0000 UTC m=+10.155643756" May 17 00:21:52.320695 kubelet[2505]: E0517 00:21:52.320313 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:52.390708 kubelet[2505]: E0517 00:21:52.390319 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:53.322427 kubelet[2505]: E0517 00:21:53.322371 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:55.864686 update_engine[1451]: I20250517 00:21:55.864605 1451 update_attempter.cc:509] Updating boot flags... May 17 00:21:55.906717 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2887) May 17 00:21:56.015756 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2886) May 17 00:21:56.180728 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2886) May 17 00:21:56.191821 sudo[1686]: pam_unix(sudo:session): session closed for user root May 17 00:21:56.250597 sshd[1683]: pam_unix(sshd:session): session closed for user core May 17 00:21:56.266618 systemd[1]: sshd@6-172.233.222.141:22-139.178.89.65:51816.service: Deactivated successfully. May 17 00:21:56.270644 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:21:56.271726 systemd[1]: session-7.scope: Consumed 4.476s CPU time, 157.9M memory peak, 0B memory swap peak. May 17 00:21:56.273477 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. May 17 00:21:56.277300 systemd-logind[1450]: Removed session 7. May 17 00:21:59.101801 systemd[1]: Created slice kubepods-besteffort-podb687e2f0_1d34_4e0d_88c4_df55408a4ba3.slice - libcontainer container kubepods-besteffort-podb687e2f0_1d34_4e0d_88c4_df55408a4ba3.slice. 
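The pod_startup_latency_tracker entry above carries two durations that are easy to misread: podStartE2EDuration runs from pod creation to the observed running time, while podStartSLOduration subtracts the image-pull window. That is why the freshly pulled tigera-operator shows 1.97s against 3.34s, whereas the pre-pulled control-plane pods earlier report identical values (their pull timestamps are the zero time). Reproducing the arithmetic from the logged timestamps:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"
created   = datetime.strptime("2025-05-17 00:21:48.000000 +0000", FMT)
pull_from = datetime.strptime("2025-05-17 00:21:49.356501 +0000", FMT)
pull_to   = datetime.strptime("2025-05-17 00:21:50.720617 +0000", FMT)
running   = datetime.strptime("2025-05-17 00:21:51.335737 +0000", FMT)

e2e = running - created            # podStartE2EDuration  ~= 3.335738s
slo = e2e - (pull_to - pull_from)  # podStartSLOduration  ~= 1.971622s
print(e2e.total_seconds(), slo.total_seconds())
```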
May 17 00:21:59.183267 kubelet[2505]: I0517 00:21:59.183218 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b687e2f0-1d34-4e0d-88c4-df55408a4ba3-typha-certs\") pod \"calico-typha-66945b6c98-bnrgl\" (UID: \"b687e2f0-1d34-4e0d-88c4-df55408a4ba3\") " pod="calico-system/calico-typha-66945b6c98-bnrgl" May 17 00:21:59.183267 kubelet[2505]: I0517 00:21:59.183267 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzq2\" (UniqueName: \"kubernetes.io/projected/b687e2f0-1d34-4e0d-88c4-df55408a4ba3-kube-api-access-gbzq2\") pod \"calico-typha-66945b6c98-bnrgl\" (UID: \"b687e2f0-1d34-4e0d-88c4-df55408a4ba3\") " pod="calico-system/calico-typha-66945b6c98-bnrgl" May 17 00:21:59.183267 kubelet[2505]: I0517 00:21:59.183285 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b687e2f0-1d34-4e0d-88c4-df55408a4ba3-tigera-ca-bundle\") pod \"calico-typha-66945b6c98-bnrgl\" (UID: \"b687e2f0-1d34-4e0d-88c4-df55408a4ba3\") " pod="calico-system/calico-typha-66945b6c98-bnrgl" May 17 00:21:59.387073 systemd[1]: Created slice kubepods-besteffort-pod7de0b31b_3185_497f_bf11_3c897e71ea9e.slice - libcontainer container kubepods-besteffort-pod7de0b31b_3185_497f_bf11_3c897e71ea9e.slice. May 17 00:21:59.409927 kubelet[2505]: E0517 00:21:59.409288 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:59.410178 containerd[1467]: time="2025-05-17T00:21:59.410149140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66945b6c98-bnrgl,Uid:b687e2f0-1d34-4e0d-88c4-df55408a4ba3,Namespace:calico-system,Attempt:0,}" May 17 00:21:59.431059 containerd[1467]: time="2025-05-17T00:21:59.430098898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:59.431059 containerd[1467]: time="2025-05-17T00:21:59.430141961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:59.431059 containerd[1467]: time="2025-05-17T00:21:59.430155806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:59.431059 containerd[1467]: time="2025-05-17T00:21:59.430214664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:59.453792 systemd[1]: Started cri-containerd-22bcbb5f84358cb70bf42afd1d59e20265cd46566f3093255f9d3e7ab553ae1b.scope - libcontainer container 22bcbb5f84358cb70bf42afd1d59e20265cd46566f3093255f9d3e7ab553ae1b. 
May 17 00:21:59.486275 kubelet[2505]: I0517 00:21:59.486242 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-flexvol-driver-host\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.486487 kubelet[2505]: I0517 00:21:59.486410 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-var-lib-calico\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.486633 kubelet[2505]: I0517 00:21:59.486619 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-var-run-calico\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.486766 kubelet[2505]: I0517 00:21:59.486752 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7de0b31b-3185-497f-bf11-3c897e71ea9e-node-certs\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.486975 kubelet[2505]: I0517 00:21:59.486920 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-xtables-lock\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.486975 kubelet[2505]: I0517 00:21:59.486942 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvjxp\" (UniqueName: \"kubernetes.io/projected/7de0b31b-3185-497f-bf11-3c897e71ea9e-kube-api-access-tvjxp\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.487120 kubelet[2505]: I0517 00:21:59.486957 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-cni-log-dir\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.487120 kubelet[2505]: I0517 00:21:59.487084 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-policysync\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.487455 kubelet[2505]: I0517 00:21:59.487265 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7de0b31b-3185-497f-bf11-3c897e71ea9e-tigera-ca-bundle\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.487455 kubelet[2505]: I0517 00:21:59.487327 2505 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-cni-bin-dir\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.487455 kubelet[2505]: I0517 00:21:59.487356 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-lib-modules\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.487455 kubelet[2505]: I0517 00:21:59.487381 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7de0b31b-3185-497f-bf11-3c897e71ea9e-cni-net-dir\") pod \"calico-node-qnnql\" (UID: \"7de0b31b-3185-497f-bf11-3c897e71ea9e\") " pod="calico-system/calico-node-qnnql" May 17 00:21:59.512135 containerd[1467]: time="2025-05-17T00:21:59.512055376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66945b6c98-bnrgl,Uid:b687e2f0-1d34-4e0d-88c4-df55408a4ba3,Namespace:calico-system,Attempt:0,} returns sandbox id \"22bcbb5f84358cb70bf42afd1d59e20265cd46566f3093255f9d3e7ab553ae1b\"" May 17 00:21:59.513501 kubelet[2505]: E0517 00:21:59.513119 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:21:59.514711 containerd[1467]: time="2025-05-17T00:21:59.514606497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:21:59.589996 kubelet[2505]: E0517 00:21:59.589396 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.589996 kubelet[2505]: W0517 00:21:59.589414 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.589996 kubelet[2505]: E0517 00:21:59.589436 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.589996 kubelet[2505]: E0517 00:21:59.589643 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.589996 kubelet[2505]: W0517 00:21:59.589651 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.589996 kubelet[2505]: E0517 00:21:59.589660 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.589996 kubelet[2505]: E0517 00:21:59.589831 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.589996 kubelet[2505]: W0517 00:21:59.589839 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.589996 kubelet[2505]: E0517 00:21:59.589846 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.590246 kubelet[2505]: E0517 00:21:59.590079 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.590246 kubelet[2505]: W0517 00:21:59.590088 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.590246 kubelet[2505]: E0517 00:21:59.590102 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.592204 kubelet[2505]: E0517 00:21:59.592149 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.592204 kubelet[2505]: W0517 00:21:59.592166 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.592204 kubelet[2505]: E0517 00:21:59.592181 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.596219 kubelet[2505]: E0517 00:21:59.596203 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.596219 kubelet[2505]: W0517 00:21:59.596216 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.596310 kubelet[2505]: E0517 00:21:59.596227 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.679171 kubelet[2505]: E0517 00:21:59.679062 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrmvr" podUID="c739a616-a481-41f3-a04d-de803459e701" May 17 00:21:59.690976 containerd[1467]: time="2025-05-17T00:21:59.690936842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qnnql,Uid:7de0b31b-3185-497f-bf11-3c897e71ea9e,Namespace:calico-system,Attempt:0,}" May 17 00:21:59.709185 containerd[1467]: time="2025-05-17T00:21:59.708415865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:59.709185 containerd[1467]: time="2025-05-17T00:21:59.708549087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:59.709185 containerd[1467]: time="2025-05-17T00:21:59.708562561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:59.709185 containerd[1467]: time="2025-05-17T00:21:59.708627411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:59.728788 systemd[1]: Started cri-containerd-f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99.scope - libcontainer container f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99. May 17 00:21:59.750014 containerd[1467]: time="2025-05-17T00:21:59.749974461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qnnql,Uid:7de0b31b-3185-497f-bf11-3c897e71ea9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\"" May 17 00:21:59.778712 kubelet[2505]: E0517 00:21:59.778652 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.778712 kubelet[2505]: W0517 00:21:59.778703 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.778847 kubelet[2505]: E0517 00:21:59.778730 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.779015 kubelet[2505]: E0517 00:21:59.778996 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.779015 kubelet[2505]: W0517 00:21:59.779009 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.779070 kubelet[2505]: E0517 00:21:59.779018 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.779292 kubelet[2505]: E0517 00:21:59.779274 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.779292 kubelet[2505]: W0517 00:21:59.779286 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.779352 kubelet[2505]: E0517 00:21:59.779322 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.779612 kubelet[2505]: E0517 00:21:59.779595 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.779612 kubelet[2505]: W0517 00:21:59.779608 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.779685 kubelet[2505]: E0517 00:21:59.779616 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.779942 kubelet[2505]: E0517 00:21:59.779926 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.779942 kubelet[2505]: W0517 00:21:59.779938 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.780003 kubelet[2505]: E0517 00:21:59.779946 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.780189 kubelet[2505]: E0517 00:21:59.780172 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.780189 kubelet[2505]: W0517 00:21:59.780184 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.780234 kubelet[2505]: E0517 00:21:59.780192 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.780430 kubelet[2505]: E0517 00:21:59.780415 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.780430 kubelet[2505]: W0517 00:21:59.780426 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.780481 kubelet[2505]: E0517 00:21:59.780434 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.780683 kubelet[2505]: E0517 00:21:59.780658 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.780683 kubelet[2505]: W0517 00:21:59.780681 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.780726 kubelet[2505]: E0517 00:21:59.780689 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.780916 kubelet[2505]: E0517 00:21:59.780887 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.780916 kubelet[2505]: W0517 00:21:59.780899 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.780916 kubelet[2505]: E0517 00:21:59.780907 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.781143 kubelet[2505]: E0517 00:21:59.781126 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.781143 kubelet[2505]: W0517 00:21:59.781138 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.781206 kubelet[2505]: E0517 00:21:59.781186 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.781438 kubelet[2505]: E0517 00:21:59.781405 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.781438 kubelet[2505]: W0517 00:21:59.781418 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.781511 kubelet[2505]: E0517 00:21:59.781426 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.781713 kubelet[2505]: E0517 00:21:59.781701 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.781713 kubelet[2505]: W0517 00:21:59.781711 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.781774 kubelet[2505]: E0517 00:21:59.781719 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.781918 kubelet[2505]: E0517 00:21:59.781902 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.781970 kubelet[2505]: W0517 00:21:59.781932 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.781970 kubelet[2505]: E0517 00:21:59.781940 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.782125 kubelet[2505]: E0517 00:21:59.782109 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.782125 kubelet[2505]: W0517 00:21:59.782121 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.782200 kubelet[2505]: E0517 00:21:59.782129 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.782341 kubelet[2505]: E0517 00:21:59.782315 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.782341 kubelet[2505]: W0517 00:21:59.782326 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.782341 kubelet[2505]: E0517 00:21:59.782334 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.782514 kubelet[2505]: E0517 00:21:59.782498 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.782514 kubelet[2505]: W0517 00:21:59.782508 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.782561 kubelet[2505]: E0517 00:21:59.782515 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.783095 kubelet[2505]: E0517 00:21:59.783072 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.783095 kubelet[2505]: W0517 00:21:59.783090 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.783212 kubelet[2505]: E0517 00:21:59.783100 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.783503 kubelet[2505]: E0517 00:21:59.783455 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.783503 kubelet[2505]: W0517 00:21:59.783466 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.783678 kubelet[2505]: E0517 00:21:59.783596 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.784039 kubelet[2505]: E0517 00:21:59.783954 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.784039 kubelet[2505]: W0517 00:21:59.783965 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.784039 kubelet[2505]: E0517 00:21:59.783973 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.784333 kubelet[2505]: E0517 00:21:59.784247 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.784333 kubelet[2505]: W0517 00:21:59.784257 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.784333 kubelet[2505]: E0517 00:21:59.784265 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.788778 kubelet[2505]: E0517 00:21:59.788762 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.788778 kubelet[2505]: W0517 00:21:59.788775 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.788844 kubelet[2505]: E0517 00:21:59.788784 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.788844 kubelet[2505]: I0517 00:21:59.788804 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c739a616-a481-41f3-a04d-de803459e701-kubelet-dir\") pod \"csi-node-driver-hrmvr\" (UID: \"c739a616-a481-41f3-a04d-de803459e701\") " pod="calico-system/csi-node-driver-hrmvr" May 17 00:21:59.789007 kubelet[2505]: E0517 00:21:59.788992 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.789007 kubelet[2505]: W0517 00:21:59.789004 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.789062 kubelet[2505]: E0517 00:21:59.789013 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.789062 kubelet[2505]: I0517 00:21:59.789025 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c739a616-a481-41f3-a04d-de803459e701-socket-dir\") pod \"csi-node-driver-hrmvr\" (UID: \"c739a616-a481-41f3-a04d-de803459e701\") " pod="calico-system/csi-node-driver-hrmvr" May 17 00:21:59.789236 kubelet[2505]: E0517 00:21:59.789220 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.789236 kubelet[2505]: W0517 00:21:59.789233 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.789289 kubelet[2505]: E0517 00:21:59.789241 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.789289 kubelet[2505]: I0517 00:21:59.789265 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c739a616-a481-41f3-a04d-de803459e701-varrun\") pod \"csi-node-driver-hrmvr\" (UID: \"c739a616-a481-41f3-a04d-de803459e701\") " pod="calico-system/csi-node-driver-hrmvr" May 17 00:21:59.789475 kubelet[2505]: E0517 00:21:59.789462 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.789475 kubelet[2505]: W0517 00:21:59.789474 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.789531 kubelet[2505]: E0517 00:21:59.789481 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.789531 kubelet[2505]: I0517 00:21:59.789508 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c739a616-a481-41f3-a04d-de803459e701-registration-dir\") pod \"csi-node-driver-hrmvr\" (UID: \"c739a616-a481-41f3-a04d-de803459e701\") " pod="calico-system/csi-node-driver-hrmvr" May 17 00:21:59.789721 kubelet[2505]: E0517 00:21:59.789707 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.789721 kubelet[2505]: W0517 00:21:59.789719 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.789765 kubelet[2505]: E0517 00:21:59.789726 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.789765 kubelet[2505]: I0517 00:21:59.789753 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s922\" (UniqueName: \"kubernetes.io/projected/c739a616-a481-41f3-a04d-de803459e701-kube-api-access-2s922\") pod \"csi-node-driver-hrmvr\" (UID: \"c739a616-a481-41f3-a04d-de803459e701\") " pod="calico-system/csi-node-driver-hrmvr" May 17 00:21:59.790018 kubelet[2505]: E0517 00:21:59.790004 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.790018 kubelet[2505]: W0517 00:21:59.790017 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.790071 kubelet[2505]: E0517 00:21:59.790025 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.790241 kubelet[2505]: E0517 00:21:59.790228 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.790241 kubelet[2505]: W0517 00:21:59.790239 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.790283 kubelet[2505]: E0517 00:21:59.790247 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.790461 kubelet[2505]: E0517 00:21:59.790448 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.790461 kubelet[2505]: W0517 00:21:59.790460 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.790510 kubelet[2505]: E0517 00:21:59.790467 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.790686 kubelet[2505]: E0517 00:21:59.790650 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.790686 kubelet[2505]: W0517 00:21:59.790683 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.790730 kubelet[2505]: E0517 00:21:59.790691 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.790882 kubelet[2505]: E0517 00:21:59.790868 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.790882 kubelet[2505]: W0517 00:21:59.790880 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.790937 kubelet[2505]: E0517 00:21:59.790887 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.791065 kubelet[2505]: E0517 00:21:59.791052 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.791065 kubelet[2505]: W0517 00:21:59.791063 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.791112 kubelet[2505]: E0517 00:21:59.791071 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.791248 kubelet[2505]: E0517 00:21:59.791236 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.791248 kubelet[2505]: W0517 00:21:59.791246 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.791300 kubelet[2505]: E0517 00:21:59.791254 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.791426 kubelet[2505]: E0517 00:21:59.791413 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.791426 kubelet[2505]: W0517 00:21:59.791424 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.791467 kubelet[2505]: E0517 00:21:59.791431 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.791602 kubelet[2505]: E0517 00:21:59.791590 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.791602 kubelet[2505]: W0517 00:21:59.791600 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.791701 kubelet[2505]: E0517 00:21:59.791607 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.791853 kubelet[2505]: E0517 00:21:59.791839 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.791853 kubelet[2505]: W0517 00:21:59.791851 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.791901 kubelet[2505]: E0517 00:21:59.791859 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.891019 kubelet[2505]: E0517 00:21:59.890999 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.891019 kubelet[2505]: W0517 00:21:59.891012 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.891103 kubelet[2505]: E0517 00:21:59.891024 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.891245 kubelet[2505]: E0517 00:21:59.891231 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.891245 kubelet[2505]: W0517 00:21:59.891241 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.891292 kubelet[2505]: E0517 00:21:59.891249 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.891481 kubelet[2505]: E0517 00:21:59.891469 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.891481 kubelet[2505]: W0517 00:21:59.891479 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.891541 kubelet[2505]: E0517 00:21:59.891487 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.891717 kubelet[2505]: E0517 00:21:59.891704 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.891717 kubelet[2505]: W0517 00:21:59.891714 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.891764 kubelet[2505]: E0517 00:21:59.891722 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.891939 kubelet[2505]: E0517 00:21:59.891927 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.891939 kubelet[2505]: W0517 00:21:59.891937 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.891998 kubelet[2505]: E0517 00:21:59.891945 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.892170 kubelet[2505]: E0517 00:21:59.892158 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.892170 kubelet[2505]: W0517 00:21:59.892168 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.892221 kubelet[2505]: E0517 00:21:59.892176 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.892376 kubelet[2505]: E0517 00:21:59.892365 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.892376 kubelet[2505]: W0517 00:21:59.892374 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.892431 kubelet[2505]: E0517 00:21:59.892382 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.892584 kubelet[2505]: E0517 00:21:59.892568 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.892584 kubelet[2505]: W0517 00:21:59.892581 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.892690 kubelet[2505]: E0517 00:21:59.892592 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.892817 kubelet[2505]: E0517 00:21:59.892803 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.892817 kubelet[2505]: W0517 00:21:59.892815 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.892878 kubelet[2505]: E0517 00:21:59.892824 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.893026 kubelet[2505]: E0517 00:21:59.893012 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.893026 kubelet[2505]: W0517 00:21:59.893024 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.893108 kubelet[2505]: E0517 00:21:59.893032 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.893222 kubelet[2505]: E0517 00:21:59.893208 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.893222 kubelet[2505]: W0517 00:21:59.893219 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.893280 kubelet[2505]: E0517 00:21:59.893227 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.893419 kubelet[2505]: E0517 00:21:59.893406 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.893419 kubelet[2505]: W0517 00:21:59.893416 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.893474 kubelet[2505]: E0517 00:21:59.893425 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.893728 kubelet[2505]: E0517 00:21:59.893715 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.893728 kubelet[2505]: W0517 00:21:59.893726 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.893808 kubelet[2505]: E0517 00:21:59.893734 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.893931 kubelet[2505]: E0517 00:21:59.893917 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.893931 kubelet[2505]: W0517 00:21:59.893928 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.893981 kubelet[2505]: E0517 00:21:59.893937 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.894142 kubelet[2505]: E0517 00:21:59.894129 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.894142 kubelet[2505]: W0517 00:21:59.894139 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.894273 kubelet[2505]: E0517 00:21:59.894147 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.894341 kubelet[2505]: E0517 00:21:59.894330 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.894341 kubelet[2505]: W0517 00:21:59.894340 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.894378 kubelet[2505]: E0517 00:21:59.894347 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.894530 kubelet[2505]: E0517 00:21:59.894516 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.894530 kubelet[2505]: W0517 00:21:59.894527 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.894590 kubelet[2505]: E0517 00:21:59.894537 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.894765 kubelet[2505]: E0517 00:21:59.894751 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.894765 kubelet[2505]: W0517 00:21:59.894762 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.894841 kubelet[2505]: E0517 00:21:59.894770 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.894970 kubelet[2505]: E0517 00:21:59.894959 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.894970 kubelet[2505]: W0517 00:21:59.894969 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.895051 kubelet[2505]: E0517 00:21:59.894976 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.895164 kubelet[2505]: E0517 00:21:59.895151 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.895164 kubelet[2505]: W0517 00:21:59.895162 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.895213 kubelet[2505]: E0517 00:21:59.895170 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.895382 kubelet[2505]: E0517 00:21:59.895370 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.895382 kubelet[2505]: W0517 00:21:59.895381 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.895440 kubelet[2505]: E0517 00:21:59.895388 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.895580 kubelet[2505]: E0517 00:21:59.895567 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.895580 kubelet[2505]: W0517 00:21:59.895578 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.895636 kubelet[2505]: E0517 00:21:59.895585 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.895833 kubelet[2505]: E0517 00:21:59.895819 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.895833 kubelet[2505]: W0517 00:21:59.895830 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.895892 kubelet[2505]: E0517 00:21:59.895839 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.896334 kubelet[2505]: E0517 00:21:59.896299 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.896334 kubelet[2505]: W0517 00:21:59.896311 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.896334 kubelet[2505]: E0517 00:21:59.896320 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:59.896715 kubelet[2505]: E0517 00:21:59.896559 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.896715 kubelet[2505]: W0517 00:21:59.896590 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.896715 kubelet[2505]: E0517 00:21:59.896599 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:59.902894 kubelet[2505]: E0517 00:21:59.902880 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:59.902894 kubelet[2505]: W0517 00:21:59.902891 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:59.902963 kubelet[2505]: E0517 00:21:59.902900 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:00.352425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726046219.mount: Deactivated successfully. May 17 00:22:00.788564 containerd[1467]: time="2025-05-17T00:22:00.787952461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:00.788564 containerd[1467]: time="2025-05-17T00:22:00.788515840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:22:00.789048 containerd[1467]: time="2025-05-17T00:22:00.789008427Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:00.790304 containerd[1467]: time="2025-05-17T00:22:00.790283428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:00.790918 containerd[1467]: time="2025-05-17T00:22:00.790890529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.2760559s" May 17 00:22:00.790950 containerd[1467]: time="2025-05-17T00:22:00.790919088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:22:00.791860 containerd[1467]: time="2025-05-17T00:22:00.791826310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:22:00.805055 containerd[1467]: time="2025-05-17T00:22:00.805008850Z" level=info msg="CreateContainer within sandbox \"22bcbb5f84358cb70bf42afd1d59e20265cd46566f3093255f9d3e7ab553ae1b\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:22:00.818067 containerd[1467]: time="2025-05-17T00:22:00.818031964Z" level=info msg="CreateContainer within sandbox \"22bcbb5f84358cb70bf42afd1d59e20265cd46566f3093255f9d3e7ab553ae1b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c4c3e0bac943b2a4d573e0a8dd597638a677f182327b9be82aa73379913001b3\"" May 17 00:22:00.828455 containerd[1467]: time="2025-05-17T00:22:00.828431083Z" level=info msg="StartContainer for \"c4c3e0bac943b2a4d573e0a8dd597638a677f182327b9be82aa73379913001b3\"" May 17 00:22:00.853854 systemd[1]: Started cri-containerd-c4c3e0bac943b2a4d573e0a8dd597638a677f182327b9be82aa73379913001b3.scope - libcontainer container c4c3e0bac943b2a4d573e0a8dd597638a677f182327b9be82aa73379913001b3. May 17 00:22:00.892794 containerd[1467]: time="2025-05-17T00:22:00.892737479Z" level=info msg="StartContainer for \"c4c3e0bac943b2a4d573e0a8dd597638a677f182327b9be82aa73379913001b3\" returns successfully" May 17 00:22:01.275050 kubelet[2505]: E0517 00:22:01.274363 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrmvr" podUID="c739a616-a481-41f3-a04d-de803459e701" May 17 00:22:01.340363 kubelet[2505]: E0517 00:22:01.340331 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:01.349096 kubelet[2505]: I0517 00:22:01.348527 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66945b6c98-bnrgl" podStartSLOduration=1.070997736 podStartE2EDuration="2.348514783s" podCreationTimestamp="2025-05-17 00:21:59 +0000 UTC" firstStartedPulling="2025-05-17 00:21:59.51420077 +0000 UTC m=+18.334106769" lastFinishedPulling="2025-05-17 00:22:00.791717817 +0000 UTC m=+19.611623816" observedRunningTime="2025-05-17 00:22:01.34832852 +0000 UTC m=+20.168234519" watchObservedRunningTime="2025-05-17 00:22:01.348514783 +0000 UTC m=+20.168420782" May 17 00:22:01.399781 kubelet[2505]: E0517 00:22:01.399749 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.400160 kubelet[2505]: W0517 00:22:01.399998 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.400160 kubelet[2505]: E0517 00:22:01.400023 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:01.400529 kubelet[2505]: E0517 00:22:01.400464 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.400529 kubelet[2505]: W0517 00:22:01.400475 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.400529 kubelet[2505]: E0517 00:22:01.400484 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.401188 kubelet[2505]: E0517 00:22:01.401047 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.401188 kubelet[2505]: W0517 00:22:01.401077 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.401188 kubelet[2505]: E0517 00:22:01.401097 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.401511 kubelet[2505]: E0517 00:22:01.401403 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.401511 kubelet[2505]: W0517 00:22:01.401415 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.401511 kubelet[2505]: E0517 00:22:01.401444 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.401816 kubelet[2505]: E0517 00:22:01.401721 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.401816 kubelet[2505]: W0517 00:22:01.401733 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.401816 kubelet[2505]: E0517 00:22:01.401741 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.401993 kubelet[2505]: E0517 00:22:01.401961 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.401993 kubelet[2505]: W0517 00:22:01.401970 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.401993 kubelet[2505]: E0517 00:22:01.401978 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:01.402234 kubelet[2505]: E0517 00:22:01.402190 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.402234 kubelet[2505]: W0517 00:22:01.402198 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.402234 kubelet[2505]: E0517 00:22:01.402206 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.402629 kubelet[2505]: E0517 00:22:01.402466 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.402629 kubelet[2505]: W0517 00:22:01.402479 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.402629 kubelet[2505]: E0517 00:22:01.402487 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.402778 kubelet[2505]: E0517 00:22:01.402726 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.402778 kubelet[2505]: W0517 00:22:01.402735 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.402778 kubelet[2505]: E0517 00:22:01.402743 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.403094 kubelet[2505]: E0517 00:22:01.402944 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.403094 kubelet[2505]: W0517 00:22:01.402955 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.403094 kubelet[2505]: E0517 00:22:01.402962 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.403454 kubelet[2505]: E0517 00:22:01.403158 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.403454 kubelet[2505]: W0517 00:22:01.403166 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.403454 kubelet[2505]: E0517 00:22:01.403173 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:01.403454 kubelet[2505]: E0517 00:22:01.403423 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.403454 kubelet[2505]: W0517 00:22:01.403432 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.403454 kubelet[2505]: E0517 00:22:01.403439 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.403784 kubelet[2505]: E0517 00:22:01.403758 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.403784 kubelet[2505]: W0517 00:22:01.403773 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.403884 kubelet[2505]: E0517 00:22:01.403800 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.404616 kubelet[2505]: E0517 00:22:01.404501 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.404616 kubelet[2505]: W0517 00:22:01.404604 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.404616 kubelet[2505]: E0517 00:22:01.404615 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.405459 kubelet[2505]: E0517 00:22:01.405430 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.405459 kubelet[2505]: W0517 00:22:01.405446 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.405801 kubelet[2505]: E0517 00:22:01.405553 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.406751 kubelet[2505]: E0517 00:22:01.406496 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.406751 kubelet[2505]: W0517 00:22:01.406510 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.406751 kubelet[2505]: E0517 00:22:01.406519 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:01.407548 kubelet[2505]: E0517 00:22:01.407531 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.407548 kubelet[2505]: W0517 00:22:01.407545 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.407628 kubelet[2505]: E0517 00:22:01.407557 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:22:01.412441 kubelet[2505]: E0517 00:22:01.412221 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.412441 kubelet[2505]: W0517 00:22:01.412232 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.412441 kubelet[2505]: E0517 00:22:01.412241 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.412770 kubelet[2505]: E0517 00:22:01.412750 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:22:01.412770 kubelet[2505]: W0517 00:22:01.412764 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:22:01.412817 kubelet[2505]: E0517 00:22:01.412773 2505 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:22:01.464000 containerd[1467]: time="2025-05-17T00:22:01.463970337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:01.464656 containerd[1467]: time="2025-05-17T00:22:01.464614041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:22:01.464959 containerd[1467]: time="2025-05-17T00:22:01.464919038Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:01.466496 containerd[1467]: time="2025-05-17T00:22:01.466443953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:01.467439 containerd[1467]: time="2025-05-17T00:22:01.467094378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 675.242461ms" May 17 00:22:01.467439 containerd[1467]: time="2025-05-17T00:22:01.467130238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:22:01.470121 containerd[1467]: time="2025-05-17T00:22:01.470096565Z" level=info msg="CreateContainer within sandbox \"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:22:01.494444 containerd[1467]: time="2025-05-17T00:22:01.494404757Z" level=info msg="CreateContainer within sandbox 
\"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a\"" May 17 00:22:01.495019 containerd[1467]: time="2025-05-17T00:22:01.494961975Z" level=info msg="StartContainer for \"e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a\"" May 17 00:22:01.524387 systemd[1]: run-containerd-runc-k8s.io-e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a-runc.LeYrxN.mount: Deactivated successfully. May 17 00:22:01.530796 systemd[1]: Started cri-containerd-e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a.scope - libcontainer container e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a. May 17 00:22:01.563240 containerd[1467]: time="2025-05-17T00:22:01.563164835Z" level=info msg="StartContainer for \"e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a\" returns successfully" May 17 00:22:01.579183 systemd[1]: cri-containerd-e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a.scope: Deactivated successfully. May 17 00:22:01.678811 containerd[1467]: time="2025-05-17T00:22:01.678764791Z" level=info msg="shim disconnected" id=e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a namespace=k8s.io May 17 00:22:01.679219 containerd[1467]: time="2025-05-17T00:22:01.679025825Z" level=warning msg="cleaning up after shim disconnected" id=e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a namespace=k8s.io May 17 00:22:01.679219 containerd[1467]: time="2025-05-17T00:22:01.679040159Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:01.691317 containerd[1467]: time="2025-05-17T00:22:01.691294123Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:22:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:22:02.289420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e409385c8257d23d1f99943db8aea6a7d44cedf03ecaf47f9e293a1d5a9c2e1a-rootfs.mount: Deactivated successfully. 
May 17 00:22:02.342751 kubelet[2505]: I0517 00:22:02.342726 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:02.343134 kubelet[2505]: E0517 00:22:02.343001 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:02.344952 containerd[1467]: time="2025-05-17T00:22:02.344448448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:22:03.277705 kubelet[2505]: E0517 00:22:03.277123 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hrmvr" podUID="c739a616-a481-41f3-a04d-de803459e701" May 17 00:22:03.883415 containerd[1467]: time="2025-05-17T00:22:03.883352820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.884271 containerd[1467]: time="2025-05-17T00:22:03.884058653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:22:03.884731 containerd[1467]: time="2025-05-17T00:22:03.884702261Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.886694 containerd[1467]: time="2025-05-17T00:22:03.886415517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.887466 containerd[1467]: time="2025-05-17T00:22:03.887069217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 1.542591201s" May 17 00:22:03.887466 containerd[1467]: time="2025-05-17T00:22:03.887104146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:22:03.891207 containerd[1467]: time="2025-05-17T00:22:03.891145487Z" level=info msg="CreateContainer within sandbox \"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:22:03.900625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370026852.mount: Deactivated successfully. 
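The recurring "Nameserver limits exceeded" warnings in these entries trace back to the three-nameserver ceiling of glibc resolvers: when the host resolv.conf lists more entries, the kubelet applies only the first three and logs the rest as omitted, which is why the applied line always reads "172.232.0.16 172.232.0.21 172.232.0.13". A rough stand-alone approximation of that clamp, with simplified parsing and message formatting rather than actual kubelet code:

// Approximate sketch of the kubelet's nameserver clamp (not kubelet
// source): parse nameserver lines and keep only the first three.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // historical glibc MAXNS limit

func clampNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with four entries; the fourth is dropped.
	conf := "nameserver 172.232.0.16\nnameserver 172.232.0.21\nnameserver 172.232.0.13\nnameserver 8.8.8.8\n"
	fmt.Println(clampNameservers(conf)) // [172.232.0.16 172.232.0.21 172.232.0.13]
}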
May 17 00:22:03.909356 containerd[1467]: time="2025-05-17T00:22:03.909329386Z" level=info msg="CreateContainer within sandbox \"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43\"" May 17 00:22:03.910179 containerd[1467]: time="2025-05-17T00:22:03.910137376Z" level=info msg="StartContainer for \"d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43\"" May 17 00:22:03.943803 systemd[1]: Started cri-containerd-d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43.scope - libcontainer container d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43. May 17 00:22:03.972391 containerd[1467]: time="2025-05-17T00:22:03.972341315Z" level=info msg="StartContainer for \"d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43\" returns successfully" May 17 00:22:04.463520 containerd[1467]: time="2025-05-17T00:22:04.463466662Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:22:04.467990 systemd[1]: cri-containerd-d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43.scope: Deactivated successfully. May 17 00:22:04.475810 kubelet[2505]: I0517 00:22:04.475549 2505 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:22:04.495402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43-rootfs.mount: Deactivated successfully. May 17 00:22:04.520587 kubelet[2505]: I0517 00:22:04.520492 2505 status_manager.go:895] "Failed to get status for pod" podUID="de4de8b9-3fd9-48eb-b6c1-3ea87c183557" pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" err="pods \"calico-kube-controllers-58c8cb96d-rqnqs\" is forbidden: User \"system:node:172-233-222-141\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-233-222-141' and this object" May 17 00:22:04.531504 systemd[1]: Created slice kubepods-besteffort-podde4de8b9_3fd9_48eb_b6c1_3ea87c183557.slice - libcontainer container kubepods-besteffort-podde4de8b9_3fd9_48eb_b6c1_3ea87c183557.slice. May 17 00:22:04.533697 kubelet[2505]: I0517 00:22:04.532776 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2h2\" (UniqueName: \"kubernetes.io/projected/de4de8b9-3fd9-48eb-b6c1-3ea87c183557-kube-api-access-hp2h2\") pod \"calico-kube-controllers-58c8cb96d-rqnqs\" (UID: \"de4de8b9-3fd9-48eb-b6c1-3ea87c183557\") " pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" May 17 00:22:04.533697 kubelet[2505]: I0517 00:22:04.532811 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de4de8b9-3fd9-48eb-b6c1-3ea87c183557-tigera-ca-bundle\") pod \"calico-kube-controllers-58c8cb96d-rqnqs\" (UID: \"de4de8b9-3fd9-48eb-b6c1-3ea87c183557\") " pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" May 17 00:22:04.558747 systemd[1]: Created slice kubepods-besteffort-pod84bea557_3a73_4c30_b7d9_60dca7b8e6f7.slice - libcontainer container kubepods-besteffort-pod84bea557_3a73_4c30_b7d9_60dca7b8e6f7.slice. 
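The "failed to reload cni configuration" error above shows the CRI plugin re-scanning /etc/cni/net.d after the install-cni container wrote calico-kubeconfig: a kubeconfig alone is not a network config, so the runtime stays in "cni plugin not initialized" until a loadable conflist appears there. The RunPodSandbox failures that follow are the sibling symptom of the same not-yet-initialized state: the calico CNI binary stats /var/lib/calico/nodename, which only exists once the calico/node container (whose image pull begins below) is running. A simplified stand-in for the config probe, with approximate file patterns and wording rather than containerd source:

// Simplified sketch (not containerd code) of the CNI config probe: scan
// the conf dir for network configs and report the "not initialized"
// condition when none are loadable yet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func loadCNIConfig(dir string) error {
	var candidates []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pat))
		candidates = append(candidates, m...)
	}
	if len(candidates) == 0 {
		return fmt.Errorf("no network config found in %s: cni plugin not initialized", dir)
	}
	fmt.Println("would load:", candidates)
	return nil
}

func main() {
	if err := loadCNIConfig("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}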
May 17 00:22:04.560961 containerd[1467]: time="2025-05-17T00:22:04.560750448Z" level=info msg="shim disconnected" id=d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43 namespace=k8s.io May 17 00:22:04.560961 containerd[1467]: time="2025-05-17T00:22:04.560814483Z" level=warning msg="cleaning up after shim disconnected" id=d7a24838a932725be281f01a8c9e23e9236165d36ec6352b0a6fe0d1e38d1f43 namespace=k8s.io May 17 00:22:04.560961 containerd[1467]: time="2025-05-17T00:22:04.560823356Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:04.597780 systemd[1]: Created slice kubepods-burstable-pod48992ca7_0880_469a_be33_3fed00473f03.slice - libcontainer container kubepods-burstable-pod48992ca7_0880_469a_be33_3fed00473f03.slice. May 17 00:22:04.617617 systemd[1]: Created slice kubepods-besteffort-pod0a6cc853_64b3_4a6d_8418_b38799cbf9cb.slice - libcontainer container kubepods-besteffort-pod0a6cc853_64b3_4a6d_8418_b38799cbf9cb.slice. May 17 00:22:04.618459 systemd[1]: Created slice kubepods-besteffort-poda8d2447a_ad8d_4842_8426_24362dceb355.slice - libcontainer container kubepods-besteffort-poda8d2447a_ad8d_4842_8426_24362dceb355.slice. May 17 00:22:04.630482 systemd[1]: Created slice kubepods-besteffort-poddcfc94f9_ca7b_4e91_b245_56a309ffba77.slice - libcontainer container kubepods-besteffort-poddcfc94f9_ca7b_4e91_b245_56a309ffba77.slice. May 17 00:22:04.633430 kubelet[2505]: I0517 00:22:04.633120 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-948r7\" (UniqueName: \"kubernetes.io/projected/0a6cc853-64b3-4a6d-8418-b38799cbf9cb-kube-api-access-948r7\") pod \"calico-apiserver-7d8b46c577-4dr29\" (UID: \"0a6cc853-64b3-4a6d-8418-b38799cbf9cb\") " pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" May 17 00:22:04.633818 kubelet[2505]: I0517 00:22:04.633655 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84bea557-3a73-4c30-b7d9-60dca7b8e6f7-calico-apiserver-certs\") pod \"calico-apiserver-7d8b46c577-g5mnw\" (UID: \"84bea557-3a73-4c30-b7d9-60dca7b8e6f7\") " pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" May 17 00:22:04.633818 kubelet[2505]: I0517 00:22:04.633739 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8d2447a-ad8d-4842-8426-24362dceb355-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-rx28r\" (UID: \"a8d2447a-ad8d-4842-8426-24362dceb355\") " pod="calico-system/goldmane-78d55f7ddc-rx28r" May 17 00:22:04.633818 kubelet[2505]: I0517 00:22:04.633773 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8d2447a-ad8d-4842-8426-24362dceb355-config\") pod \"goldmane-78d55f7ddc-rx28r\" (UID: \"a8d2447a-ad8d-4842-8426-24362dceb355\") " pod="calico-system/goldmane-78d55f7ddc-rx28r" May 17 00:22:04.634021 kubelet[2505]: I0517 00:22:04.634008 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-ca-bundle\") pod \"whisker-6c76d77cb8-5vbxx\" (UID: \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\") " pod="calico-system/whisker-6c76d77cb8-5vbxx" May 17 00:22:04.634150 kubelet[2505]: I0517 00:22:04.634108 2505 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzr5d\" (UniqueName: \"kubernetes.io/projected/84bea557-3a73-4c30-b7d9-60dca7b8e6f7-kube-api-access-dzr5d\") pod \"calico-apiserver-7d8b46c577-g5mnw\" (UID: \"84bea557-3a73-4c30-b7d9-60dca7b8e6f7\") " pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" May 17 00:22:04.634510 kubelet[2505]: I0517 00:22:04.634231 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frntd\" (UniqueName: \"kubernetes.io/projected/a8d2447a-ad8d-4842-8426-24362dceb355-kube-api-access-frntd\") pod \"goldmane-78d55f7ddc-rx28r\" (UID: \"a8d2447a-ad8d-4842-8426-24362dceb355\") " pod="calico-system/goldmane-78d55f7ddc-rx28r" May 17 00:22:04.634510 kubelet[2505]: I0517 00:22:04.634257 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48992ca7-0880-469a-be33-3fed00473f03-config-volume\") pod \"coredns-674b8bbfcf-jv4r6\" (UID: \"48992ca7-0880-469a-be33-3fed00473f03\") " pod="kube-system/coredns-674b8bbfcf-jv4r6" May 17 00:22:04.634510 kubelet[2505]: I0517 00:22:04.634274 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0a6cc853-64b3-4a6d-8418-b38799cbf9cb-calico-apiserver-certs\") pod \"calico-apiserver-7d8b46c577-4dr29\" (UID: \"0a6cc853-64b3-4a6d-8418-b38799cbf9cb\") " pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" May 17 00:22:04.634692 kubelet[2505]: I0517 00:22:04.634606 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-backend-key-pair\") pod \"whisker-6c76d77cb8-5vbxx\" (UID: \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\") " pod="calico-system/whisker-6c76d77cb8-5vbxx" May 17 00:22:04.634692 kubelet[2505]: I0517 00:22:04.634632 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f83297f-0f6d-448a-89e2-0744aceeab4a-config-volume\") pod \"coredns-674b8bbfcf-hrlj7\" (UID: \"6f83297f-0f6d-448a-89e2-0744aceeab4a\") " pod="kube-system/coredns-674b8bbfcf-hrlj7" May 17 00:22:04.634821 kubelet[2505]: I0517 00:22:04.634800 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2lh4\" (UniqueName: \"kubernetes.io/projected/48992ca7-0880-469a-be33-3fed00473f03-kube-api-access-w2lh4\") pod \"coredns-674b8bbfcf-jv4r6\" (UID: \"48992ca7-0880-469a-be33-3fed00473f03\") " pod="kube-system/coredns-674b8bbfcf-jv4r6" May 17 00:22:04.634927 kubelet[2505]: I0517 00:22:04.634898 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfkt\" (UniqueName: \"kubernetes.io/projected/6f83297f-0f6d-448a-89e2-0744aceeab4a-kube-api-access-bjfkt\") pod \"coredns-674b8bbfcf-hrlj7\" (UID: \"6f83297f-0f6d-448a-89e2-0744aceeab4a\") " pod="kube-system/coredns-674b8bbfcf-hrlj7" May 17 00:22:04.635256 kubelet[2505]: I0517 00:22:04.635014 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcmh\" (UniqueName: \"kubernetes.io/projected/dcfc94f9-ca7b-4e91-b245-56a309ffba77-kube-api-access-tlcmh\") pod \"whisker-6c76d77cb8-5vbxx\" 
(UID: \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\") " pod="calico-system/whisker-6c76d77cb8-5vbxx" May 17 00:22:04.635256 kubelet[2505]: I0517 00:22:04.635045 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a8d2447a-ad8d-4842-8426-24362dceb355-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-rx28r\" (UID: \"a8d2447a-ad8d-4842-8426-24362dceb355\") " pod="calico-system/goldmane-78d55f7ddc-rx28r" May 17 00:22:04.637113 systemd[1]: Created slice kubepods-burstable-pod6f83297f_0f6d_448a_89e2_0744aceeab4a.slice - libcontainer container kubepods-burstable-pod6f83297f_0f6d_448a_89e2_0744aceeab4a.slice. May 17 00:22:04.849522 containerd[1467]: time="2025-05-17T00:22:04.849414795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c8cb96d-rqnqs,Uid:de4de8b9-3fd9-48eb-b6c1-3ea87c183557,Namespace:calico-system,Attempt:0,}" May 17 00:22:04.875641 containerd[1467]: time="2025-05-17T00:22:04.875370711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-g5mnw,Uid:84bea557-3a73-4c30-b7d9-60dca7b8e6f7,Namespace:calico-apiserver,Attempt:0,}" May 17 00:22:04.903993 kubelet[2505]: E0517 00:22:04.903855 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:04.907274 containerd[1467]: time="2025-05-17T00:22:04.905879679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jv4r6,Uid:48992ca7-0880-469a-be33-3fed00473f03,Namespace:kube-system,Attempt:0,}" May 17 00:22:04.927643 containerd[1467]: time="2025-05-17T00:22:04.927621237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rx28r,Uid:a8d2447a-ad8d-4842-8426-24362dceb355,Namespace:calico-system,Attempt:0,}" May 17 00:22:04.930105 containerd[1467]: time="2025-05-17T00:22:04.930084089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-4dr29,Uid:0a6cc853-64b3-4a6d-8418-b38799cbf9cb,Namespace:calico-apiserver,Attempt:0,}" May 17 00:22:04.938067 containerd[1467]: time="2025-05-17T00:22:04.938047620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c76d77cb8-5vbxx,Uid:dcfc94f9-ca7b-4e91-b245-56a309ffba77,Namespace:calico-system,Attempt:0,}" May 17 00:22:04.940035 kubelet[2505]: E0517 00:22:04.940007 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:04.941860 containerd[1467]: time="2025-05-17T00:22:04.941659299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hrlj7,Uid:6f83297f-0f6d-448a-89e2-0744aceeab4a,Namespace:kube-system,Attempt:0,}" May 17 00:22:04.944047 containerd[1467]: time="2025-05-17T00:22:04.944024967Z" level=error msg="Failed to destroy network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:04.947055 containerd[1467]: time="2025-05-17T00:22:04.947031844Z" level=error msg="encountered an error cleaning up failed sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:04.947251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9-shm.mount: Deactivated successfully. May 17 00:22:04.948432 containerd[1467]: time="2025-05-17T00:22:04.947913954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c8cb96d-rqnqs,Uid:de4de8b9-3fd9-48eb-b6c1-3ea87c183557,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:04.948514 kubelet[2505]: E0517 00:22:04.948080 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:04.948514 kubelet[2505]: E0517 00:22:04.948135 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" May 17 00:22:04.948514 kubelet[2505]: E0517 00:22:04.948153 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" May 17 00:22:04.948593 kubelet[2505]: E0517 00:22:04.948219 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58c8cb96d-rqnqs_calico-system(de4de8b9-3fd9-48eb-b6c1-3ea87c183557)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58c8cb96d-rqnqs_calico-system(de4de8b9-3fd9-48eb-b6c1-3ea87c183557)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" podUID="de4de8b9-3fd9-48eb-b6c1-3ea87c183557" May 17 00:22:05.037730 containerd[1467]: time="2025-05-17T00:22:05.037632065Z" level=error msg="Failed to destroy network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.038163 containerd[1467]: time="2025-05-17T00:22:05.038140596Z" level=error msg="encountered an error cleaning up failed sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.038278 containerd[1467]: time="2025-05-17T00:22:05.038256443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-g5mnw,Uid:84bea557-3a73-4c30-b7d9-60dca7b8e6f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.038918 kubelet[2505]: E0517 00:22:05.038507 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.038918 kubelet[2505]: E0517 00:22:05.038558 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" May 17 00:22:05.038918 kubelet[2505]: E0517 00:22:05.038579 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" May 17 00:22:05.039030 kubelet[2505]: E0517 00:22:05.038623 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d8b46c577-g5mnw_calico-apiserver(84bea557-3a73-4c30-b7d9-60dca7b8e6f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d8b46c577-g5mnw_calico-apiserver(84bea557-3a73-4c30-b7d9-60dca7b8e6f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" podUID="84bea557-3a73-4c30-b7d9-60dca7b8e6f7" May 17 00:22:05.097482 containerd[1467]: time="2025-05-17T00:22:05.097433010Z" level=error msg="Failed to destroy network for sandbox 
\"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.097995 containerd[1467]: time="2025-05-17T00:22:05.097952225Z" level=error msg="encountered an error cleaning up failed sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.098332 containerd[1467]: time="2025-05-17T00:22:05.098274811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hrlj7,Uid:6f83297f-0f6d-448a-89e2-0744aceeab4a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.098779 kubelet[2505]: E0517 00:22:05.098461 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.098779 kubelet[2505]: E0517 00:22:05.098507 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hrlj7" May 17 00:22:05.098779 kubelet[2505]: E0517 00:22:05.098526 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hrlj7" May 17 00:22:05.098892 kubelet[2505]: E0517 00:22:05.098565 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hrlj7_kube-system(6f83297f-0f6d-448a-89e2-0744aceeab4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hrlj7_kube-system(6f83297f-0f6d-448a-89e2-0744aceeab4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hrlj7" podUID="6f83297f-0f6d-448a-89e2-0744aceeab4a" May 17 00:22:05.103496 containerd[1467]: time="2025-05-17T00:22:05.102374087Z" 
level=error msg="Failed to destroy network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.106230 containerd[1467]: time="2025-05-17T00:22:05.106130611Z" level=error msg="encountered an error cleaning up failed sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.106873 containerd[1467]: time="2025-05-17T00:22:05.106169601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jv4r6,Uid:48992ca7-0880-469a-be33-3fed00473f03,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.107500 kubelet[2505]: E0517 00:22:05.107439 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.107543 kubelet[2505]: E0517 00:22:05.107501 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jv4r6" May 17 00:22:05.107543 kubelet[2505]: E0517 00:22:05.107526 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jv4r6" May 17 00:22:05.107757 kubelet[2505]: E0517 00:22:05.107578 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jv4r6_kube-system(48992ca7-0880-469a-be33-3fed00473f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jv4r6_kube-system(48992ca7-0880-469a-be33-3fed00473f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jv4r6" podUID="48992ca7-0880-469a-be33-3fed00473f03" May 17 00:22:05.120143 
containerd[1467]: time="2025-05-17T00:22:05.120108449Z" level=error msg="Failed to destroy network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.121072 containerd[1467]: time="2025-05-17T00:22:05.120453581Z" level=error msg="encountered an error cleaning up failed sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.121072 containerd[1467]: time="2025-05-17T00:22:05.120495351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c76d77cb8-5vbxx,Uid:dcfc94f9-ca7b-4e91-b245-56a309ffba77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.121167 kubelet[2505]: E0517 00:22:05.120632 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.121167 kubelet[2505]: E0517 00:22:05.120710 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c76d77cb8-5vbxx" May 17 00:22:05.121167 kubelet[2505]: E0517 00:22:05.120733 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c76d77cb8-5vbxx" May 17 00:22:05.121233 kubelet[2505]: E0517 00:22:05.120788 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c76d77cb8-5vbxx_calico-system(dcfc94f9-ca7b-4e91-b245-56a309ffba77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c76d77cb8-5vbxx_calico-system(dcfc94f9-ca7b-4e91-b245-56a309ffba77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c76d77cb8-5vbxx" 
podUID="dcfc94f9-ca7b-4e91-b245-56a309ffba77" May 17 00:22:05.125162 containerd[1467]: time="2025-05-17T00:22:05.125112970Z" level=error msg="Failed to destroy network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.125433 containerd[1467]: time="2025-05-17T00:22:05.125399828Z" level=error msg="encountered an error cleaning up failed sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.125470 containerd[1467]: time="2025-05-17T00:22:05.125443788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-4dr29,Uid:0a6cc853-64b3-4a6d-8418-b38799cbf9cb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.125571 kubelet[2505]: E0517 00:22:05.125542 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.125602 kubelet[2505]: E0517 00:22:05.125574 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" May 17 00:22:05.125602 kubelet[2505]: E0517 00:22:05.125592 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" May 17 00:22:05.125651 kubelet[2505]: E0517 00:22:05.125620 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d8b46c577-4dr29_calico-apiserver(0a6cc853-64b3-4a6d-8418-b38799cbf9cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d8b46c577-4dr29_calico-apiserver(0a6cc853-64b3-4a6d-8418-b38799cbf9cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" podUID="0a6cc853-64b3-4a6d-8418-b38799cbf9cb" May 17 00:22:05.128898 containerd[1467]: time="2025-05-17T00:22:05.128855481Z" level=error msg="Failed to destroy network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.129231 containerd[1467]: time="2025-05-17T00:22:05.129195432Z" level=error msg="encountered an error cleaning up failed sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.129290 containerd[1467]: time="2025-05-17T00:22:05.129258127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rx28r,Uid:a8d2447a-ad8d-4842-8426-24362dceb355,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.129567 kubelet[2505]: E0517 00:22:05.129448 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.129567 kubelet[2505]: E0517 00:22:05.129477 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-rx28r" May 17 00:22:05.129567 kubelet[2505]: E0517 00:22:05.129502 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-rx28r" May 17 00:22:05.129683 kubelet[2505]: E0517 00:22:05.129536 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-rx28r_calico-system(a8d2447a-ad8d-4842-8426-24362dceb355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-rx28r_calico-system(a8d2447a-ad8d-4842-8426-24362dceb355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:22:05.279625 systemd[1]: Created slice kubepods-besteffort-podc739a616_a481_41f3_a04d_de803459e701.slice - libcontainer container kubepods-besteffort-podc739a616_a481_41f3_a04d_de803459e701.slice. May 17 00:22:05.281917 containerd[1467]: time="2025-05-17T00:22:05.281892643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrmvr,Uid:c739a616-a481-41f3-a04d-de803459e701,Namespace:calico-system,Attempt:0,}" May 17 00:22:05.328922 containerd[1467]: time="2025-05-17T00:22:05.328865525Z" level=error msg="Failed to destroy network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.329215 containerd[1467]: time="2025-05-17T00:22:05.329182750Z" level=error msg="encountered an error cleaning up failed sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.329259 containerd[1467]: time="2025-05-17T00:22:05.329237913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrmvr,Uid:c739a616-a481-41f3-a04d-de803459e701,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.329418 kubelet[2505]: E0517 00:22:05.329387 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.329458 kubelet[2505]: E0517 00:22:05.329433 2505 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrmvr" May 17 00:22:05.329484 kubelet[2505]: E0517 00:22:05.329454 2505 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hrmvr" May 17 00:22:05.329765 kubelet[2505]: E0517 00:22:05.329501 2505 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hrmvr_calico-system(c739a616-a481-41f3-a04d-de803459e701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hrmvr_calico-system(c739a616-a481-41f3-a04d-de803459e701)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hrmvr" podUID="c739a616-a481-41f3-a04d-de803459e701" May 17 00:22:05.349574 kubelet[2505]: I0517 00:22:05.349546 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:05.350626 containerd[1467]: time="2025-05-17T00:22:05.350217268Z" level=info msg="StopPodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\"" May 17 00:22:05.350626 containerd[1467]: time="2025-05-17T00:22:05.350363543Z" level=info msg="Ensure that sandbox 4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9 in task-service has been cleanup successfully" May 17 00:22:05.352618 kubelet[2505]: I0517 00:22:05.352554 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:05.354321 containerd[1467]: time="2025-05-17T00:22:05.354218191Z" level=info msg="StopPodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\"" May 17 00:22:05.354901 containerd[1467]: time="2025-05-17T00:22:05.354388271Z" level=info msg="Ensure that sandbox 9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b in task-service has been cleanup successfully" May 17 00:22:05.356629 kubelet[2505]: I0517 00:22:05.356603 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:05.357006 containerd[1467]: time="2025-05-17T00:22:05.356976907Z" level=info msg="StopPodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\"" May 17 00:22:05.358724 containerd[1467]: time="2025-05-17T00:22:05.358493128Z" level=info msg="Ensure that sandbox cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050 in task-service has been cleanup successfully" May 17 00:22:05.365693 containerd[1467]: time="2025-05-17T00:22:05.363275156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:22:05.367424 kubelet[2505]: I0517 00:22:05.366752 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:05.370808 containerd[1467]: time="2025-05-17T00:22:05.370547798Z" level=info msg="StopPodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\"" May 17 00:22:05.371357 containerd[1467]: time="2025-05-17T00:22:05.371076353Z" level=info msg="Ensure that sandbox e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75 in task-service has been cleanup successfully" May 17 00:22:05.373848 kubelet[2505]: I0517 00:22:05.373818 2505 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:05.375218 containerd[1467]: time="2025-05-17T00:22:05.375186832Z" level=info msg="StopPodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\"" May 17 00:22:05.376128 kubelet[2505]: I0517 00:22:05.376098 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:05.376728 containerd[1467]: time="2025-05-17T00:22:05.376686169Z" level=info msg="StopPodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\"" May 17 00:22:05.377206 containerd[1467]: time="2025-05-17T00:22:05.376792164Z" level=info msg="Ensure that sandbox 639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d in task-service has been cleanup successfully" May 17 00:22:05.380047 containerd[1467]: time="2025-05-17T00:22:05.378451279Z" level=info msg="Ensure that sandbox 47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb in task-service has been cleanup successfully" May 17 00:22:05.389038 kubelet[2505]: I0517 00:22:05.388936 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:05.390350 containerd[1467]: time="2025-05-17T00:22:05.390314664Z" level=info msg="StopPodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\"" May 17 00:22:05.390483 containerd[1467]: time="2025-05-17T00:22:05.390459588Z" level=info msg="Ensure that sandbox 1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f in task-service has been cleanup successfully" May 17 00:22:05.405807 kubelet[2505]: I0517 00:22:05.405699 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:05.412191 containerd[1467]: time="2025-05-17T00:22:05.412159294Z" level=info msg="StopPodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\"" May 17 00:22:05.412352 containerd[1467]: time="2025-05-17T00:22:05.412326444Z" level=info msg="Ensure that sandbox 43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11 in task-service has been cleanup successfully" May 17 00:22:05.428888 containerd[1467]: time="2025-05-17T00:22:05.428851947Z" level=error msg="StopPodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" failed" error="failed to destroy network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.432777 kubelet[2505]: E0517 00:22:05.432723 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:05.432940 kubelet[2505]: E0517 00:22:05.432767 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b"} May 17 00:22:05.432940 kubelet[2505]: E0517 00:22:05.432839 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.432940 kubelet[2505]: E0517 00:22:05.432859 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c76d77cb8-5vbxx" podUID="dcfc94f9-ca7b-4e91-b245-56a309ffba77" May 17 00:22:05.470570 containerd[1467]: time="2025-05-17T00:22:05.470411401Z" level=error msg="StopPodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" failed" error="failed to destroy network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.471366 kubelet[2505]: E0517 00:22:05.471324 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:05.471431 kubelet[2505]: E0517 00:22:05.471370 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75"} May 17 00:22:05.471431 kubelet[2505]: E0517 00:22:05.471396 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f83297f-0f6d-448a-89e2-0744aceeab4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.471431 kubelet[2505]: E0517 00:22:05.471415 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f83297f-0f6d-448a-89e2-0744aceeab4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hrlj7" podUID="6f83297f-0f6d-448a-89e2-0744aceeab4a" May 17 00:22:05.482462 containerd[1467]: time="2025-05-17T00:22:05.482132632Z" level=error msg="StopPodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" failed" error="failed to destroy network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.482517 kubelet[2505]: E0517 00:22:05.482327 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:05.482517 kubelet[2505]: E0517 00:22:05.482376 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb"} May 17 00:22:05.482517 kubelet[2505]: E0517 00:22:05.482408 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8d2447a-ad8d-4842-8426-24362dceb355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.482517 kubelet[2505]: E0517 00:22:05.482431 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8d2447a-ad8d-4842-8426-24362dceb355\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:22:05.493225 containerd[1467]: time="2025-05-17T00:22:05.492953998Z" level=error msg="StopPodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" failed" error="failed to destroy network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.493277 kubelet[2505]: E0517 00:22:05.493105 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:05.493277 kubelet[2505]: E0517 00:22:05.493146 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11"} May 17 00:22:05.493277 kubelet[2505]: E0517 00:22:05.493170 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84bea557-3a73-4c30-b7d9-60dca7b8e6f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.493277 kubelet[2505]: E0517 00:22:05.493200 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84bea557-3a73-4c30-b7d9-60dca7b8e6f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" podUID="84bea557-3a73-4c30-b7d9-60dca7b8e6f7" May 17 00:22:05.495236 containerd[1467]: time="2025-05-17T00:22:05.494929948Z" level=error msg="StopPodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" failed" error="failed to destroy network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.495316 containerd[1467]: time="2025-05-17T00:22:05.495256075Z" level=error msg="StopPodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" failed" error="failed to destroy network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.495477 kubelet[2505]: E0517 00:22:05.495419 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:05.495576 kubelet[2505]: E0517 00:22:05.495552 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050"} May 17 00:22:05.495819 kubelet[2505]: E0517 00:22:05.495774 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48992ca7-0880-469a-be33-3fed00473f03\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.495870 kubelet[2505]: E0517 00:22:05.495654 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:05.495870 kubelet[2505]: E0517 00:22:05.495859 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9"} May 17 00:22:05.495934 kubelet[2505]: E0517 00:22:05.495888 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de4de8b9-3fd9-48eb-b6c1-3ea87c183557\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.495934 kubelet[2505]: E0517 00:22:05.495910 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de4de8b9-3fd9-48eb-b6c1-3ea87c183557\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" podUID="de4de8b9-3fd9-48eb-b6c1-3ea87c183557" May 17 00:22:05.496242 kubelet[2505]: E0517 00:22:05.495797 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48992ca7-0880-469a-be33-3fed00473f03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jv4r6" podUID="48992ca7-0880-469a-be33-3fed00473f03" May 17 00:22:05.505704 containerd[1467]: time="2025-05-17T00:22:05.505134407Z" level=error msg="StopPodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" failed" error="failed to destroy network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.505817 kubelet[2505]: E0517 00:22:05.505379 2505 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:05.505817 kubelet[2505]: E0517 00:22:05.505437 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d"} May 17 00:22:05.505817 kubelet[2505]: E0517 00:22:05.505458 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a6cc853-64b3-4a6d-8418-b38799cbf9cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.505817 kubelet[2505]: E0517 00:22:05.505499 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a6cc853-64b3-4a6d-8418-b38799cbf9cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" podUID="0a6cc853-64b3-4a6d-8418-b38799cbf9cb" May 17 00:22:05.506945 containerd[1467]: time="2025-05-17T00:22:05.506895916Z" level=error msg="StopPodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" failed" error="failed to destroy network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:22:05.507049 kubelet[2505]: E0517 00:22:05.507022 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:05.507116 kubelet[2505]: E0517 00:22:05.507052 2505 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f"} May 17 00:22:05.507116 kubelet[2505]: E0517 00:22:05.507073 2505 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c739a616-a481-41f3-a04d-de803459e701\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:22:05.507116 kubelet[2505]: E0517 00:22:05.507093 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c739a616-a481-41f3-a04d-de803459e701\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hrmvr" podUID="c739a616-a481-41f3-a04d-de803459e701" May 17 00:22:05.899929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb-shm.mount: Deactivated successfully. May 17 00:22:05.900034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050-shm.mount: Deactivated successfully. May 17 00:22:05.900104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11-shm.mount: Deactivated successfully. May 17 00:22:08.565535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148460754.mount: Deactivated successfully. May 17 00:22:08.590909 containerd[1467]: time="2025-05-17T00:22:08.590870883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:08.591658 containerd[1467]: time="2025-05-17T00:22:08.591614598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:22:08.593403 containerd[1467]: time="2025-05-17T00:22:08.592413066Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:08.594544 containerd[1467]: time="2025-05-17T00:22:08.593876243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:08.594544 containerd[1467]: time="2025-05-17T00:22:08.594436810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 3.231133908s" May 17 00:22:08.594544 containerd[1467]: time="2025-05-17T00:22:08.594462346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:22:08.616771 containerd[1467]: time="2025-05-17T00:22:08.616735372Z" level=info msg="CreateContainer within sandbox \"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:22:08.628418 containerd[1467]: time="2025-05-17T00:22:08.628382736Z" level=info msg="CreateContainer within sandbox \"f4bbd5de871f8d7ca50c73e0023a454ff14044d65739bf01cb6c1aaef683aa99\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7\"" May 17 00:22:08.629831 containerd[1467]: time="2025-05-17T00:22:08.629798403Z" level=info msg="StartContainer for \"d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7\"" May 17 00:22:08.660903 systemd[1]: Started cri-containerd-d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7.scope - libcontainer container d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7. May 17 00:22:08.690687 containerd[1467]: time="2025-05-17T00:22:08.690357494Z" level=info msg="StartContainer for \"d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7\" returns successfully" May 17 00:22:08.777825 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:22:08.777921 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:22:08.841685 containerd[1467]: time="2025-05-17T00:22:08.841383565Z" level=info msg="StopPodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\"" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.934 [INFO][3717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.934 [INFO][3717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" iface="eth0" netns="/var/run/netns/cni-5f8c4a45-d37d-372a-ffaf-d5f1cabb7405" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.934 [INFO][3717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" iface="eth0" netns="/var/run/netns/cni-5f8c4a45-d37d-372a-ffaf-d5f1cabb7405" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.935 [INFO][3717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" iface="eth0" netns="/var/run/netns/cni-5f8c4a45-d37d-372a-ffaf-d5f1cabb7405" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.935 [INFO][3717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.935 [INFO][3717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.964 [INFO][3732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.964 [INFO][3732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.964 [INFO][3732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.977 [WARNING][3732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.977 [INFO][3732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.978 [INFO][3732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:08.989905 containerd[1467]: 2025-05-17 00:22:08.986 [INFO][3717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:08.991258 containerd[1467]: time="2025-05-17T00:22:08.990544433Z" level=info msg="TearDown network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" successfully" May 17 00:22:08.991258 containerd[1467]: time="2025-05-17T00:22:08.990569539Z" level=info msg="StopPodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" returns successfully" May 17 00:22:09.070849 kubelet[2505]: I0517 00:22:09.070795 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-backend-key-pair\") pod \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\" (UID: \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\") " May 17 00:22:09.070849 kubelet[2505]: I0517 00:22:09.070830 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcmh\" (UniqueName: \"kubernetes.io/projected/dcfc94f9-ca7b-4e91-b245-56a309ffba77-kube-api-access-tlcmh\") pod \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\" (UID: \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\") " May 17 00:22:09.070849 kubelet[2505]: I0517 00:22:09.070849 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-ca-bundle\") pod \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\" (UID: \"dcfc94f9-ca7b-4e91-b245-56a309ffba77\") " May 17 00:22:09.071638 kubelet[2505]: I0517 00:22:09.071245 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dcfc94f9-ca7b-4e91-b245-56a309ffba77" (UID: "dcfc94f9-ca7b-4e91-b245-56a309ffba77"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:22:09.074736 kubelet[2505]: I0517 00:22:09.074640 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dcfc94f9-ca7b-4e91-b245-56a309ffba77" (UID: "dcfc94f9-ca7b-4e91-b245-56a309ffba77"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:22:09.074939 kubelet[2505]: I0517 00:22:09.074902 2505 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcfc94f9-ca7b-4e91-b245-56a309ffba77-kube-api-access-tlcmh" (OuterVolumeSpecName: "kube-api-access-tlcmh") pod "dcfc94f9-ca7b-4e91-b245-56a309ffba77" (UID: "dcfc94f9-ca7b-4e91-b245-56a309ffba77"). InnerVolumeSpecName "kube-api-access-tlcmh". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:22:09.171257 kubelet[2505]: I0517 00:22:09.171184 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-backend-key-pair\") on node \"172-233-222-141\" DevicePath \"\"" May 17 00:22:09.171257 kubelet[2505]: I0517 00:22:09.171208 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tlcmh\" (UniqueName: \"kubernetes.io/projected/dcfc94f9-ca7b-4e91-b245-56a309ffba77-kube-api-access-tlcmh\") on node \"172-233-222-141\" DevicePath \"\"" May 17 00:22:09.171257 kubelet[2505]: I0517 00:22:09.171218 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcfc94f9-ca7b-4e91-b245-56a309ffba77-whisker-ca-bundle\") on node \"172-233-222-141\" DevicePath \"\"" May 17 00:22:09.280721 systemd[1]: Removed slice kubepods-besteffort-poddcfc94f9_ca7b_4e91_b245_56a309ffba77.slice - libcontainer container kubepods-besteffort-poddcfc94f9_ca7b_4e91_b245_56a309ffba77.slice. May 17 00:22:09.438294 kubelet[2505]: I0517 00:22:09.437264 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qnnql" podStartSLOduration=1.5931899139999999 podStartE2EDuration="10.437244749s" podCreationTimestamp="2025-05-17 00:21:59 +0000 UTC" firstStartedPulling="2025-05-17 00:21:59.751217501 +0000 UTC m=+18.571123500" lastFinishedPulling="2025-05-17 00:22:08.595272336 +0000 UTC m=+27.415178335" observedRunningTime="2025-05-17 00:22:09.427939653 +0000 UTC m=+28.247845652" watchObservedRunningTime="2025-05-17 00:22:09.437244749 +0000 UTC m=+28.257150788" May 17 00:22:09.481139 systemd[1]: Created slice kubepods-besteffort-podfbe987ff_c3c8_4769_8d91_b50b803b038b.slice - libcontainer container kubepods-besteffort-podfbe987ff_c3c8_4769_8d91_b50b803b038b.slice. May 17 00:22:09.571059 systemd[1]: run-netns-cni\x2d5f8c4a45\x2dd37d\x2d372a\x2dffaf\x2dd5f1cabb7405.mount: Deactivated successfully. May 17 00:22:09.571168 systemd[1]: var-lib-kubelet-pods-dcfc94f9\x2dca7b\x2d4e91\x2db245\x2d56a309ffba77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlcmh.mount: Deactivated successfully. 
May 17 00:22:09.571242 systemd[1]: var-lib-kubelet-pods-dcfc94f9\x2dca7b\x2d4e91\x2db245\x2d56a309ffba77-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:22:09.573628 kubelet[2505]: I0517 00:22:09.573323 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbe987ff-c3c8-4769-8d91-b50b803b038b-whisker-ca-bundle\") pod \"whisker-79c6b7464b-t786w\" (UID: \"fbe987ff-c3c8-4769-8d91-b50b803b038b\") " pod="calico-system/whisker-79c6b7464b-t786w" May 17 00:22:09.573628 kubelet[2505]: I0517 00:22:09.573364 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fbe987ff-c3c8-4769-8d91-b50b803b038b-whisker-backend-key-pair\") pod \"whisker-79c6b7464b-t786w\" (UID: \"fbe987ff-c3c8-4769-8d91-b50b803b038b\") " pod="calico-system/whisker-79c6b7464b-t786w" May 17 00:22:09.573628 kubelet[2505]: I0517 00:22:09.573386 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nscn\" (UniqueName: \"kubernetes.io/projected/fbe987ff-c3c8-4769-8d91-b50b803b038b-kube-api-access-4nscn\") pod \"whisker-79c6b7464b-t786w\" (UID: \"fbe987ff-c3c8-4769-8d91-b50b803b038b\") " pod="calico-system/whisker-79c6b7464b-t786w" May 17 00:22:09.789414 containerd[1467]: time="2025-05-17T00:22:09.789282161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c6b7464b-t786w,Uid:fbe987ff-c3c8-4769-8d91-b50b803b038b,Namespace:calico-system,Attempt:0,}" May 17 00:22:09.901822 systemd-networkd[1392]: cali4e489843c7a: Link UP May 17 00:22:09.902567 systemd-networkd[1392]: cali4e489843c7a: Gained carrier May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.823 [INFO][3753] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.834 [INFO][3753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0 whisker-79c6b7464b- calico-system fbe987ff-c3c8-4769-8d91-b50b803b038b 892 0 2025-05-17 00:22:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79c6b7464b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-233-222-141 whisker-79c6b7464b-t786w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4e489843c7a [] [] }} ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.834 [INFO][3753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.859 [INFO][3765] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" HandleID="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424"
Workload="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.859 [INFO][3765] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" HandleID="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Workload="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235630), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-141", "pod":"whisker-79c6b7464b-t786w", "timestamp":"2025-05-17 00:22:09.859561044 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.859 [INFO][3765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.859 [INFO][3765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.859 [INFO][3765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.865 [INFO][3765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.869 [INFO][3765] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.875 [INFO][3765] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.877 [INFO][3765] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.878 [INFO][3765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.878 [INFO][3765] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.879 [INFO][3765] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424 May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.883 [INFO][3765] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.889 [INFO][3765] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.193/26] block=192.168.24.192/26 handle="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" host="172-233-222-141" May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.890 [INFO][3765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.193/26] handle="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" host="172-233-222-141" May 17 
00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.890 [INFO][3765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:09.919014 containerd[1467]: 2025-05-17 00:22:09.890 [INFO][3765] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.193/26] IPv6=[] ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" HandleID="k8s-pod-network.537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Workload="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.919875 containerd[1467]: 2025-05-17 00:22:09.892 [INFO][3753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0", GenerateName:"whisker-79c6b7464b-", Namespace:"calico-system", SelfLink:"", UID:"fbe987ff-c3c8-4769-8d91-b50b803b038b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c6b7464b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"whisker-79c6b7464b-t786w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4e489843c7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.919875 containerd[1467]: 2025-05-17 00:22:09.892 [INFO][3753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.193/32] ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.919875 containerd[1467]: 2025-05-17 00:22:09.892 [INFO][3753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e489843c7a ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.919875 containerd[1467]: 2025-05-17 00:22:09.903 [INFO][3753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.919875 containerd[1467]: 2025-05-17 00:22:09.904 [INFO][3753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0", GenerateName:"whisker-79c6b7464b-", Namespace:"calico-system", SelfLink:"", UID:"fbe987ff-c3c8-4769-8d91-b50b803b038b", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 22, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c6b7464b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424", Pod:"whisker-79c6b7464b-t786w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4e489843c7a", MAC:"5e:5f:42:40:dd:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:09.919875 containerd[1467]: 2025-05-17 00:22:09.912 [INFO][3753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424" Namespace="calico-system" Pod="whisker-79c6b7464b-t786w" WorkloadEndpoint="172--233--222--141-k8s-whisker--79c6b7464b--t786w-eth0" May 17 00:22:09.942330 containerd[1467]: time="2025-05-17T00:22:09.942215252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:09.942496 containerd[1467]: time="2025-05-17T00:22:09.942439358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:09.942496 containerd[1467]: time="2025-05-17T00:22:09.942476815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:09.942978 containerd[1467]: time="2025-05-17T00:22:09.942905971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:09.967839 systemd[1]: Started cri-containerd-537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424.scope - libcontainer container 537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424. 
May 17 00:22:10.017145 containerd[1467]: time="2025-05-17T00:22:10.017050377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c6b7464b-t786w,Uid:fbe987ff-c3c8-4769-8d91-b50b803b038b,Namespace:calico-system,Attempt:0,} returns sandbox id \"537e511cdd6c1ce2c4f68ad198ff25e9b5f26d0253b6fcba2bb765bdc963a424\"" May 17 00:22:10.018916 containerd[1467]: time="2025-05-17T00:22:10.018854087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:10.141624 containerd[1467]: time="2025-05-17T00:22:10.141421627Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:10.144535 containerd[1467]: time="2025-05-17T00:22:10.144509126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:10.145301 containerd[1467]: time="2025-05-17T00:22:10.144606885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:10.145374 kubelet[2505]: E0517 00:22:10.144887 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:10.145374 kubelet[2505]: E0517 00:22:10.144936 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:10.145737 kubelet[2505]: E0517 00:22:10.145070 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd29bf53547f4577a2cbbab64c8bad8c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:10.147446 containerd[1467]: time="2025-05-17T00:22:10.146985426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:10.252996 containerd[1467]: time="2025-05-17T00:22:10.252816741Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:10.256854 containerd[1467]: time="2025-05-17T00:22:10.256794313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:10.257238 containerd[1467]: time="2025-05-17T00:22:10.256835751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:10.257874 kubelet[2505]: E0517 00:22:10.257820 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:10.258026 kubelet[2505]: E0517 00:22:10.257885 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:10.258073 kubelet[2505]: E0517 00:22:10.258002 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:10.259574 kubelet[2505]: E0517 00:22:10.259531 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:22:10.418062 kubelet[2505]: I0517 00:22:10.417958 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:10.420862 kubelet[2505]: E0517 00:22:10.420484 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:22:11.275377 kubelet[2505]: I0517 00:22:11.275332 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcfc94f9-ca7b-4e91-b245-56a309ffba77" path="/var/lib/kubelet/pods/dcfc94f9-ca7b-4e91-b245-56a309ffba77/volumes" May 17 00:22:11.423294 kubelet[2505]: E0517 00:22:11.423235 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 
Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:22:11.526863 systemd-networkd[1392]: cali4e489843c7a: Gained IPv6LL May 17 00:22:16.275163 containerd[1467]: time="2025-05-17T00:22:16.274875211Z" level=info msg="StopPodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\"" May 17 00:22:16.275562 containerd[1467]: time="2025-05-17T00:22:16.275223015Z" level=info msg="StopPodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\"" May 17 00:22:16.322706 kubelet[2505]: I0517 00:22:16.322116 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.324 [INFO][4040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.325 [INFO][4040] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" iface="eth0" netns="/var/run/netns/cni-42850fd7-6097-dce3-c1b8-1fc7f4b82cdf" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.325 [INFO][4040] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" iface="eth0" netns="/var/run/netns/cni-42850fd7-6097-dce3-c1b8-1fc7f4b82cdf" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.325 [INFO][4040] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" iface="eth0" netns="/var/run/netns/cni-42850fd7-6097-dce3-c1b8-1fc7f4b82cdf" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.325 [INFO][4040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.325 [INFO][4040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.363 [INFO][4053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.363 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.363 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.370 [WARNING][4053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.370 [INFO][4053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.372 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:16.385070 containerd[1467]: 2025-05-17 00:22:16.379 [INFO][4040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:16.386777 containerd[1467]: time="2025-05-17T00:22:16.386726742Z" level=info msg="TearDown network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" successfully" May 17 00:22:16.386777 containerd[1467]: time="2025-05-17T00:22:16.386764368Z" level=info msg="StopPodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" returns successfully" May 17 00:22:16.388073 containerd[1467]: time="2025-05-17T00:22:16.387926961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-4dr29,Uid:0a6cc853-64b3-4a6d-8418-b38799cbf9cb,Namespace:calico-apiserver,Attempt:1,}" May 17 00:22:16.388629 systemd[1]: run-netns-cni\x2d42850fd7\x2d6097\x2ddce3\x2dc1b8\x2d1fc7f4b82cdf.mount: Deactivated successfully. May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.321 [INFO][4033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.321 [INFO][4033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" iface="eth0" netns="/var/run/netns/cni-d11d0005-f323-e40e-48f8-7a5b0f1775ac" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.323 [INFO][4033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" iface="eth0" netns="/var/run/netns/cni-d11d0005-f323-e40e-48f8-7a5b0f1775ac" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.323 [INFO][4033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" iface="eth0" netns="/var/run/netns/cni-d11d0005-f323-e40e-48f8-7a5b0f1775ac" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.323 [INFO][4033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.323 [INFO][4033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.367 [INFO][4051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.368 [INFO][4051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.372 [INFO][4051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.382 [WARNING][4051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.382 [INFO][4051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.387 [INFO][4051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:16.398187 containerd[1467]: 2025-05-17 00:22:16.393 [INFO][4033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:16.399909 containerd[1467]: time="2025-05-17T00:22:16.399878674Z" level=info msg="TearDown network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" successfully" May 17 00:22:16.399909 containerd[1467]: time="2025-05-17T00:22:16.399901858Z" level=info msg="StopPodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" returns successfully" May 17 00:22:16.402934 kubelet[2505]: E0517 00:22:16.401731 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:16.403718 systemd[1]: run-netns-cni\x2dd11d0005\x2df323\x2de40e\x2d48f8\x2d7a5b0f1775ac.mount: Deactivated successfully. 
May 17 00:22:16.403924 containerd[1467]: time="2025-05-17T00:22:16.403788867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jv4r6,Uid:48992ca7-0880-469a-be33-3fed00473f03,Namespace:kube-system,Attempt:1,}" May 17 00:22:16.560068 systemd-networkd[1392]: cali2b7134b9925: Link UP May 17 00:22:16.561181 systemd-networkd[1392]: cali2b7134b9925: Gained carrier May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.446 [INFO][4083] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.456 [INFO][4083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0 calico-apiserver-7d8b46c577- calico-apiserver 0a6cc853-64b3-4a6d-8418-b38799cbf9cb 933 0 2025-05-17 00:21:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d8b46c577 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-233-222-141 calico-apiserver-7d8b46c577-4dr29 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b7134b9925 [] [] }} ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.457 [INFO][4083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.511 [INFO][4106] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" HandleID="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.511 [INFO][4106] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" HandleID="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-233-222-141", "pod":"calico-apiserver-7d8b46c577-4dr29", "timestamp":"2025-05-17 00:22:16.509520109 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.511 [INFO][4106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.512 [INFO][4106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.512 [INFO][4106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.522 [INFO][4106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.529 [INFO][4106] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.537 [INFO][4106] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.538 [INFO][4106] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.540 [INFO][4106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.540 [INFO][4106] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.542 [INFO][4106] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.545 [INFO][4106] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.549 [INFO][4106] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.194/26] block=192.168.24.192/26 handle="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.549 [INFO][4106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.194/26] handle="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" host="172-233-222-141" May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.550 [INFO][4106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:16.576637 containerd[1467]: 2025-05-17 00:22:16.550 [INFO][4106] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.194/26] IPv6=[] ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" HandleID="k8s-pod-network.3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.578646 containerd[1467]: 2025-05-17 00:22:16.556 [INFO][4083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6cc853-64b3-4a6d-8418-b38799cbf9cb", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"calico-apiserver-7d8b46c577-4dr29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b7134b9925", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:16.578646 containerd[1467]: 2025-05-17 00:22:16.556 [INFO][4083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.194/32] ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.578646 containerd[1467]: 2025-05-17 00:22:16.556 [INFO][4083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b7134b9925 ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.578646 containerd[1467]: 2025-05-17 00:22:16.562 [INFO][4083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.578646 containerd[1467]: 2025-05-17 00:22:16.562 [INFO][4083] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6cc853-64b3-4a6d-8418-b38799cbf9cb", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f", Pod:"calico-apiserver-7d8b46c577-4dr29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b7134b9925", MAC:"86:e7:02:e3:96:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:16.578646 containerd[1467]: 2025-05-17 00:22:16.571 [INFO][4083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-4dr29" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:16.600694 containerd[1467]: time="2025-05-17T00:22:16.597509530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:16.600694 containerd[1467]: time="2025-05-17T00:22:16.598975800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:16.600694 containerd[1467]: time="2025-05-17T00:22:16.598998984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:16.600694 containerd[1467]: time="2025-05-17T00:22:16.599117292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:16.622799 systemd[1]: Started cri-containerd-3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f.scope - libcontainer container 3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f. 
May 17 00:22:16.662390 systemd-networkd[1392]: cali7e05a77e5e6: Link UP May 17 00:22:16.663202 systemd-networkd[1392]: cali7e05a77e5e6: Gained carrier May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.474 [INFO][4088] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.489 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0 coredns-674b8bbfcf- kube-system 48992ca7-0880-469a-be33-3fed00473f03 932 0 2025-05-17 00:21:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-222-141 coredns-674b8bbfcf-jv4r6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e05a77e5e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.489 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.533 [INFO][4116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" HandleID="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.533 [INFO][4116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" HandleID="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235130), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-222-141", "pod":"coredns-674b8bbfcf-jv4r6", "timestamp":"2025-05-17 00:22:16.53319655 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.533 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.550 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.550 [INFO][4116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.624 [INFO][4116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.631 [INFO][4116] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.639 [INFO][4116] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.640 [INFO][4116] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.642 [INFO][4116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.642 [INFO][4116] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.646 [INFO][4116] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.651 [INFO][4116] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.655 [INFO][4116] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.195/26] block=192.168.24.192/26 handle="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.655 [INFO][4116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.195/26] handle="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" host="172-233-222-141" May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.655 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:16.680499 containerd[1467]: 2025-05-17 00:22:16.656 [INFO][4116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.195/26] IPv6=[] ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" HandleID="k8s-pod-network.34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.681081 containerd[1467]: 2025-05-17 00:22:16.659 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48992ca7-0880-469a-be33-3fed00473f03", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"coredns-674b8bbfcf-jv4r6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e05a77e5e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:16.681081 containerd[1467]: 2025-05-17 00:22:16.659 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.195/32] ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.681081 containerd[1467]: 2025-05-17 00:22:16.659 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e05a77e5e6 ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.681081 containerd[1467]: 2025-05-17 00:22:16.660 [INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" 
WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.681081 containerd[1467]: 2025-05-17 00:22:16.661 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48992ca7-0880-469a-be33-3fed00473f03", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f", Pod:"coredns-674b8bbfcf-jv4r6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e05a77e5e6", MAC:"a6:24:22:66:ad:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:16.681081 containerd[1467]: 2025-05-17 00:22:16.676 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f" Namespace="kube-system" Pod="coredns-674b8bbfcf-jv4r6" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:16.685516 containerd[1467]: time="2025-05-17T00:22:16.684803623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-4dr29,Uid:0a6cc853-64b3-4a6d-8418-b38799cbf9cb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f\"" May 17 00:22:16.687073 containerd[1467]: time="2025-05-17T00:22:16.687008039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:22:16.701473 containerd[1467]: time="2025-05-17T00:22:16.701345586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:16.701473 containerd[1467]: time="2025-05-17T00:22:16.701409166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:16.701473 containerd[1467]: time="2025-05-17T00:22:16.701451142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:16.701595 containerd[1467]: time="2025-05-17T00:22:16.701528094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:16.717801 systemd[1]: Started cri-containerd-34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f.scope - libcontainer container 34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f. May 17 00:22:16.759687 containerd[1467]: time="2025-05-17T00:22:16.759632931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jv4r6,Uid:48992ca7-0880-469a-be33-3fed00473f03,Namespace:kube-system,Attempt:1,} returns sandbox id \"34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f\"" May 17 00:22:16.760429 kubelet[2505]: E0517 00:22:16.760402 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:16.766137 containerd[1467]: time="2025-05-17T00:22:16.766102046Z" level=info msg="CreateContainer within sandbox \"34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:22:16.777543 containerd[1467]: time="2025-05-17T00:22:16.777504483Z" level=info msg="CreateContainer within sandbox \"34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0d870b06068f75167da5debd5d0205545f8345abe31c352b1cd3031ee41a784\"" May 17 00:22:16.778011 containerd[1467]: time="2025-05-17T00:22:16.777983499Z" level=info msg="StartContainer for \"b0d870b06068f75167da5debd5d0205545f8345abe31c352b1cd3031ee41a784\"" May 17 00:22:16.813962 systemd[1]: Started cri-containerd-b0d870b06068f75167da5debd5d0205545f8345abe31c352b1cd3031ee41a784.scope - libcontainer container b0d870b06068f75167da5debd5d0205545f8345abe31c352b1cd3031ee41a784. May 17 00:22:16.851866 containerd[1467]: time="2025-05-17T00:22:16.850684053Z" level=info msg="StartContainer for \"b0d870b06068f75167da5debd5d0205545f8345abe31c352b1cd3031ee41a784\" returns successfully" May 17 00:22:17.285399 containerd[1467]: time="2025-05-17T00:22:17.285074752Z" level=info msg="StopPodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\"" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.363 [INFO][4309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.364 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" iface="eth0" netns="/var/run/netns/cni-56789f6a-bb48-aac9-33aa-cfe06af8af39" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.364 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" iface="eth0" netns="/var/run/netns/cni-56789f6a-bb48-aac9-33aa-cfe06af8af39" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.364 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" iface="eth0" netns="/var/run/netns/cni-56789f6a-bb48-aac9-33aa-cfe06af8af39" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.364 [INFO][4309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.365 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.388 [INFO][4317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.388 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.389 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.397 [WARNING][4317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.397 [INFO][4317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.399 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:17.407762 containerd[1467]: 2025-05-17 00:22:17.402 [INFO][4309] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:17.408185 containerd[1467]: time="2025-05-17T00:22:17.407796284Z" level=info msg="TearDown network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" successfully" May 17 00:22:17.408185 containerd[1467]: time="2025-05-17T00:22:17.407819957Z" level=info msg="StopPodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" returns successfully" May 17 00:22:17.408230 kubelet[2505]: E0517 00:22:17.408055 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:17.408875 containerd[1467]: time="2025-05-17T00:22:17.408845914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hrlj7,Uid:6f83297f-0f6d-448a-89e2-0744aceeab4a,Namespace:kube-system,Attempt:1,}" May 17 00:22:17.413745 systemd[1]: run-netns-cni\x2d56789f6a\x2dbb48\x2daac9\x2d33aa\x2dcfe06af8af39.mount: Deactivated successfully. May 17 00:22:17.444908 kubelet[2505]: E0517 00:22:17.444883 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:17.479206 kubelet[2505]: I0517 00:22:17.479161 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jv4r6" podStartSLOduration=29.479148147 podStartE2EDuration="29.479148147s" podCreationTimestamp="2025-05-17 00:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:17.463877078 +0000 UTC m=+36.283783077" watchObservedRunningTime="2025-05-17 00:22:17.479148147 +0000 UTC m=+36.299054156" May 17 00:22:17.603912 systemd-networkd[1392]: cali7f4d210b56e: Link UP May 17 00:22:17.604478 systemd-networkd[1392]: cali7f4d210b56e: Gained carrier May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.508 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.528 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0 coredns-674b8bbfcf- kube-system 6f83297f-0f6d-448a-89e2-0744aceeab4a 952 0 2025-05-17 00:21:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-222-141 coredns-674b8bbfcf-hrlj7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f4d210b56e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.528 [INFO][4328] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.554 
[INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" HandleID="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.554 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" HandleID="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235020), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-222-141", "pod":"coredns-674b8bbfcf-hrlj7", "timestamp":"2025-05-17 00:22:17.554625747 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.555 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.555 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.555 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.563 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.570 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.576 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.578 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.580 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.580 [INFO][4344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.582 [INFO][4344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9 May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.586 [INFO][4344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.593 [INFO][4344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.196/26] block=192.168.24.192/26 handle="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" host="172-233-222-141" May 17 00:22:17.620832 
containerd[1467]: 2025-05-17 00:22:17.593 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.196/26] handle="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" host="172-233-222-141" May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.593 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:17.620832 containerd[1467]: 2025-05-17 00:22:17.593 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.196/26] IPv6=[] ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" HandleID="k8s-pod-network.1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.621316 containerd[1467]: 2025-05-17 00:22:17.597 [INFO][4328] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f83297f-0f6d-448a-89e2-0744aceeab4a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"coredns-674b8bbfcf-hrlj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f4d210b56e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:17.621316 containerd[1467]: 2025-05-17 00:22:17.597 [INFO][4328] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.196/32] ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.621316 containerd[1467]: 2025-05-17 00:22:17.597 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f4d210b56e ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" 
WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.621316 containerd[1467]: 2025-05-17 00:22:17.604 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.621316 containerd[1467]: 2025-05-17 00:22:17.604 [INFO][4328] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f83297f-0f6d-448a-89e2-0744aceeab4a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9", Pod:"coredns-674b8bbfcf-hrlj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f4d210b56e", MAC:"7e:48:d5:ef:a2:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:17.621316 containerd[1467]: 2025-05-17 00:22:17.615 [INFO][4328] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-hrlj7" WorkloadEndpoint="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:17.648731 containerd[1467]: time="2025-05-17T00:22:17.647613994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:17.649176 containerd[1467]: time="2025-05-17T00:22:17.649094669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:17.649355 containerd[1467]: time="2025-05-17T00:22:17.649219688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:17.649973 containerd[1467]: time="2025-05-17T00:22:17.649915933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:17.670810 systemd[1]: Started cri-containerd-1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9.scope - libcontainer container 1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9. May 17 00:22:17.724655 containerd[1467]: time="2025-05-17T00:22:17.724547104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hrlj7,Uid:6f83297f-0f6d-448a-89e2-0744aceeab4a,Namespace:kube-system,Attempt:1,} returns sandbox id \"1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9\"" May 17 00:22:17.725418 kubelet[2505]: E0517 00:22:17.725363 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:17.732489 containerd[1467]: time="2025-05-17T00:22:17.732390345Z" level=info msg="CreateContainer within sandbox \"1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:22:17.745714 containerd[1467]: time="2025-05-17T00:22:17.745395860Z" level=info msg="CreateContainer within sandbox \"1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b90d39f4cb2813e503602a305290c7d9b204125cf52ac39d633f1cb12ba7a2e7\"" May 17 00:22:17.746150 containerd[1467]: time="2025-05-17T00:22:17.746089105Z" level=info msg="StartContainer for \"b90d39f4cb2813e503602a305290c7d9b204125cf52ac39d633f1cb12ba7a2e7\"" May 17 00:22:17.786831 systemd[1]: Started cri-containerd-b90d39f4cb2813e503602a305290c7d9b204125cf52ac39d633f1cb12ba7a2e7.scope - libcontainer container b90d39f4cb2813e503602a305290c7d9b204125cf52ac39d633f1cb12ba7a2e7. May 17 00:22:17.834197 containerd[1467]: time="2025-05-17T00:22:17.834158456Z" level=info msg="StartContainer for \"b90d39f4cb2813e503602a305290c7d9b204125cf52ac39d633f1cb12ba7a2e7\" returns successfully" May 17 00:22:17.929211 systemd-networkd[1392]: cali2b7134b9925: Gained IPv6LL May 17 00:22:17.990779 systemd-networkd[1392]: cali7e05a77e5e6: Gained IPv6LL May 17 00:22:18.274839 containerd[1467]: time="2025-05-17T00:22:18.274809338Z" level=info msg="StopPodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\"" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.316 [INFO][4467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.316 [INFO][4467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" iface="eth0" netns="/var/run/netns/cni-33403334-affc-97fe-1237-c240e0c9ae64" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.319 [INFO][4467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" iface="eth0" netns="/var/run/netns/cni-33403334-affc-97fe-1237-c240e0c9ae64" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.321 [INFO][4467] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" iface="eth0" netns="/var/run/netns/cni-33403334-affc-97fe-1237-c240e0c9ae64" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.321 [INFO][4467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.321 [INFO][4467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.353 [INFO][4475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.354 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.355 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.363 [WARNING][4475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.363 [INFO][4475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.364 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:18.369797 containerd[1467]: 2025-05-17 00:22:18.366 [INFO][4467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:18.370444 containerd[1467]: time="2025-05-17T00:22:18.370131061Z" level=info msg="TearDown network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" successfully" May 17 00:22:18.370444 containerd[1467]: time="2025-05-17T00:22:18.370167036Z" level=info msg="StopPodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" returns successfully" May 17 00:22:18.371186 containerd[1467]: time="2025-05-17T00:22:18.370891522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c8cb96d-rqnqs,Uid:de4de8b9-3fd9-48eb-b6c1-3ea87c183557,Namespace:calico-system,Attempt:1,}" May 17 00:22:18.373943 systemd[1]: run-netns-cni\x2d33403334\x2daffc\x2d97fe\x2d1237\x2dc240e0c9ae64.mount: Deactivated successfully. 
May 17 00:22:18.483091 kubelet[2505]: E0517 00:22:18.482458 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:18.483091 kubelet[2505]: E0517 00:22:18.482814 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:18.498368 kubelet[2505]: I0517 00:22:18.497478 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hrlj7" podStartSLOduration=30.497466017 podStartE2EDuration="30.497466017s" podCreationTimestamp="2025-05-17 00:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:18.497236372 +0000 UTC m=+37.317142371" watchObservedRunningTime="2025-05-17 00:22:18.497466017 +0000 UTC m=+37.317372016" May 17 00:22:18.528963 systemd-networkd[1392]: calid1cdd338c99: Link UP May 17 00:22:18.534032 systemd-networkd[1392]: calid1cdd338c99: Gained carrier May 17 00:22:18.542179 kubelet[2505]: I0517 00:22:18.539752 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:18.542179 kubelet[2505]: E0517 00:22:18.540067 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:18.546499 containerd[1467]: time="2025-05-17T00:22:18.546288004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:18.548034 containerd[1467]: time="2025-05-17T00:22:18.547514294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:22:18.549900 containerd[1467]: time="2025-05-17T00:22:18.548219738Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:18.550200 containerd[1467]: time="2025-05-17T00:22:18.550158693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:18.553161 containerd[1467]: time="2025-05-17T00:22:18.551998664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 1.864944348s" May 17 00:22:18.553161 containerd[1467]: time="2025-05-17T00:22:18.552025738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.410 [INFO][4482] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.420 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0 calico-kube-controllers-58c8cb96d- calico-system de4de8b9-3fd9-48eb-b6c1-3ea87c183557 971 0 2025-05-17 00:21:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58c8cb96d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-233-222-141 calico-kube-controllers-58c8cb96d-rqnqs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid1cdd338c99 [] [] }} ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.420 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.458 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" HandleID="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.460 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" HandleID="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9700), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-141", "pod":"calico-kube-controllers-58c8cb96d-rqnqs", "timestamp":"2025-05-17 00:22:18.458107582 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.460 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.460 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.460 [INFO][4493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.468 [INFO][4493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.474 [INFO][4493] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.483 [INFO][4493] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.488 [INFO][4493] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.492 [INFO][4493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.492 [INFO][4493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.495 [INFO][4493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.503 [INFO][4493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.514 [INFO][4493] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.197/26] block=192.168.24.192/26 handle="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.514 [INFO][4493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.197/26] handle="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" host="172-233-222-141" May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.514 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
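In the allocation run just completed, host 172-233-222-141 confirms its affinity for block 192.168.24.192/26 and claims the next free address, 192.168.24.197. A toy model of that "first free address in an affine block" step using only the standard ipaddress module; this illustrates the arithmetic, not Calico's real allocator, which also persists handles and block state in the datastore under the host-wide lock logged above, and the set of already-claimed addresses is an assumption consistent with .197 being chosen here:

```python
import ipaddress

def next_free(block: str, allocated: set[str]) -> str | None:
    """Return the first unallocated host address in a Calico-style block."""
    net = ipaddress.ip_network(block)
    for ip in net.hosts():           # a /26 yields 62 host addresses
        if str(ip) not in allocated:
            return str(ip)
    return None                      # block exhausted; IPAM would try another block

# Assume .193-.196 are already claimed on this node (coredns got .196 above).
in_use = {f"192.168.24.{n}" for n in range(193, 197)}
print(next_free("192.168.24.192/26", in_use))  # -> 192.168.24.197, as logged
```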
May 17 00:22:18.559694 containerd[1467]: 2025-05-17 00:22:18.514 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.197/26] IPv6=[] ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" HandleID="k8s-pod-network.6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.560230 containerd[1467]: 2025-05-17 00:22:18.521 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0", GenerateName:"calico-kube-controllers-58c8cb96d-", Namespace:"calico-system", SelfLink:"", UID:"de4de8b9-3fd9-48eb-b6c1-3ea87c183557", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c8cb96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"calico-kube-controllers-58c8cb96d-rqnqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1cdd338c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:18.560230 containerd[1467]: 2025-05-17 00:22:18.521 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.197/32] ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.560230 containerd[1467]: 2025-05-17 00:22:18.521 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1cdd338c99 ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.560230 containerd[1467]: 2025-05-17 00:22:18.529 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.560230 containerd[1467]: 2025-05-17 00:22:18.529 
[INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0", GenerateName:"calico-kube-controllers-58c8cb96d-", Namespace:"calico-system", SelfLink:"", UID:"de4de8b9-3fd9-48eb-b6c1-3ea87c183557", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c8cb96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f", Pod:"calico-kube-controllers-58c8cb96d-rqnqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1cdd338c99", MAC:"86:16:57:3b:fe:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:18.560230 containerd[1467]: 2025-05-17 00:22:18.555 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f" Namespace="calico-system" Pod="calico-kube-controllers-58c8cb96d-rqnqs" WorkloadEndpoint="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:18.576565 containerd[1467]: time="2025-05-17T00:22:18.575718847Z" level=info msg="CreateContainer within sandbox \"3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:18.607757 containerd[1467]: time="2025-05-17T00:22:18.607717747Z" level=info msg="CreateContainer within sandbox \"3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d5305d4afd137fa99d6c1bc297f304ef85336192fb0e8b374517bfa2d3b6f8a\"" May 17 00:22:18.609154 containerd[1467]: time="2025-05-17T00:22:18.609103851Z" level=info msg="StartContainer for \"2d5305d4afd137fa99d6c1bc297f304ef85336192fb0e8b374517bfa2d3b6f8a\"" May 17 00:22:18.624682 containerd[1467]: time="2025-05-17T00:22:18.624075355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:18.625545 containerd[1467]: time="2025-05-17T00:22:18.625102396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:18.625545 containerd[1467]: time="2025-05-17T00:22:18.625120138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:18.627337 containerd[1467]: time="2025-05-17T00:22:18.625191999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:18.650804 systemd[1]: Started cri-containerd-6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f.scope - libcontainer container 6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f. May 17 00:22:18.654146 systemd[1]: Started cri-containerd-2d5305d4afd137fa99d6c1bc297f304ef85336192fb0e8b374517bfa2d3b6f8a.scope - libcontainer container 2d5305d4afd137fa99d6c1bc297f304ef85336192fb0e8b374517bfa2d3b6f8a. May 17 00:22:18.697330 containerd[1467]: time="2025-05-17T00:22:18.697302345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c8cb96d-rqnqs,Uid:de4de8b9-3fd9-48eb-b6c1-3ea87c183557,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f\"" May 17 00:22:18.700448 containerd[1467]: time="2025-05-17T00:22:18.699552197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:22:18.705017 containerd[1467]: time="2025-05-17T00:22:18.704988436Z" level=info msg="StartContainer for \"2d5305d4afd137fa99d6c1bc297f304ef85336192fb0e8b374517bfa2d3b6f8a\" returns successfully" May 17 00:22:19.276236 containerd[1467]: time="2025-05-17T00:22:19.275599230Z" level=info msg="StopPodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\"" May 17 00:22:19.277482 containerd[1467]: time="2025-05-17T00:22:19.277132420Z" level=info msg="StopPodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\"" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.342 [INFO][4646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.342 [INFO][4646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" iface="eth0" netns="/var/run/netns/cni-b5b40ed4-73a7-c715-1e2c-64ad7498bd10" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.343 [INFO][4646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" iface="eth0" netns="/var/run/netns/cni-b5b40ed4-73a7-c715-1e2c-64ad7498bd10" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.343 [INFO][4646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" iface="eth0" netns="/var/run/netns/cni-b5b40ed4-73a7-c715-1e2c-64ad7498bd10" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.343 [INFO][4646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.343 [INFO][4646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.433 [INFO][4664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.433 [INFO][4664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.433 [INFO][4664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.442 [WARNING][4664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.443 [INFO][4664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.447 [INFO][4664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:19.457870 containerd[1467]: 2025-05-17 00:22:19.452 [INFO][4646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:19.459682 containerd[1467]: time="2025-05-17T00:22:19.458937138Z" level=info msg="TearDown network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" successfully" May 17 00:22:19.459682 containerd[1467]: time="2025-05-17T00:22:19.458965092Z" level=info msg="StopPodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" returns successfully" May 17 00:22:19.462482 containerd[1467]: time="2025-05-17T00:22:19.462375330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-g5mnw,Uid:84bea557-3a73-4c30-b7d9-60dca7b8e6f7,Namespace:calico-apiserver,Attempt:1,}" May 17 00:22:19.465623 systemd[1]: run-netns-cni\x2db5b40ed4\x2d73a7\x2dc715\x2d1e2c\x2d64ad7498bd10.mount: Deactivated successfully. May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.389 [INFO][4654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.390 [INFO][4654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" iface="eth0" netns="/var/run/netns/cni-9251e324-ba9d-4fc1-1a48-fecf61928c7f" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.390 [INFO][4654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" iface="eth0" netns="/var/run/netns/cni-9251e324-ba9d-4fc1-1a48-fecf61928c7f" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.391 [INFO][4654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" iface="eth0" netns="/var/run/netns/cni-9251e324-ba9d-4fc1-1a48-fecf61928c7f" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.391 [INFO][4654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.391 [INFO][4654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.452 [INFO][4669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.454 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.454 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.466 [WARNING][4669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.467 [INFO][4669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.470 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:19.495374 containerd[1467]: 2025-05-17 00:22:19.483 [INFO][4654] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:19.501474 containerd[1467]: time="2025-05-17T00:22:19.501031943Z" level=info msg="TearDown network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" successfully" May 17 00:22:19.501474 containerd[1467]: time="2025-05-17T00:22:19.501059987Z" level=info msg="StopPodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" returns successfully" May 17 00:22:19.502895 containerd[1467]: time="2025-05-17T00:22:19.502618390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rx28r,Uid:a8d2447a-ad8d-4842-8426-24362dceb355,Namespace:calico-system,Attempt:1,}" May 17 00:22:19.503129 systemd[1]: run-netns-cni\x2d9251e324\x2dba9d\x2d4fc1\x2d1a48\x2dfecf61928c7f.mount: Deactivated successfully. May 17 00:22:19.514986 kubelet[2505]: E0517 00:22:19.514609 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:19.520214 kubelet[2505]: E0517 00:22:19.520076 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:19.522477 kubelet[2505]: E0517 00:22:19.521962 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:19.534986 kubelet[2505]: I0517 00:22:19.534903 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d8b46c577-4dr29" podStartSLOduration=20.652755437 podStartE2EDuration="22.534893102s" podCreationTimestamp="2025-05-17 00:21:57 +0000 UTC" firstStartedPulling="2025-05-17 00:22:16.686465884 +0000 UTC m=+35.506371883" lastFinishedPulling="2025-05-17 00:22:18.568603549 +0000 UTC m=+37.388509548" observedRunningTime="2025-05-17 00:22:19.534702695 +0000 UTC m=+38.354608694" watchObservedRunningTime="2025-05-17 00:22:19.534893102 +0000 UTC m=+38.354799101" May 17 00:22:19.590851 systemd-networkd[1392]: cali7f4d210b56e: Gained IPv6LL May 17 00:22:19.741047 systemd-networkd[1392]: caliedae96f96ad: Link UP May 17 00:22:19.743778 systemd-networkd[1392]: caliedae96f96ad: Gained carrier May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.568 [INFO][4677] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.597 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0 calico-apiserver-7d8b46c577- calico-apiserver 84bea557-3a73-4c30-b7d9-60dca7b8e6f7 1002 0 2025-05-17 00:21:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d8b46c577 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-233-222-141 calico-apiserver-7d8b46c577-g5mnw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedae96f96ad [] [] }} ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.597 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.656 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" HandleID="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.657 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" HandleID="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-233-222-141", "pod":"calico-apiserver-7d8b46c577-g5mnw", "timestamp":"2025-05-17 00:22:19.656909507 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.657 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.657 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.657 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.666 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.671 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.675 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.677 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.680 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.681 [INFO][4710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.684 [INFO][4710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6 May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.690 [INFO][4710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.701 [INFO][4710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.198/26] block=192.168.24.192/26 handle="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.702 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.198/26] handle="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" host="172-233-222-141" May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.702 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
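Every successful CNI ADD in this journal ends with an ipam/ipam_plugin.go 283 entry recording exactly which addresses went to which sandbox (192.168.24.198/26 for the calico-apiserver pod just above). A short sketch that pulls the ContainerID-to-IP mapping out of such lines with a regular expression; the pattern is fitted to the precise format shown here and would need adjusting for other Calico versions:

```python
import re

# Matches e.g.:
#   Calico CNI IPAM assigned addresses IPv4=[192.168.24.198/26] IPv6=[] ContainerID="485b79..."
ASSIGN_RE = re.compile(
    r'Calico CNI IPAM assigned addresses '
    r'IPv4=\[(?P<ipv4>[^\]]*)\] IPv6=\[(?P<ipv6>[^\]]*)\] '
    r'ContainerID="(?P<cid>[0-9a-f]+)"'
)

def assignments(lines):
    """Yield (container_id, ipv4_list) for every IPAM-assignment journal line."""
    for line in lines:
        m = ASSIGN_RE.search(line)
        if m:
            yield m.group("cid"), m.group("ipv4").split()

sample = ('... ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses '
          'IPv4=[192.168.24.198/26] IPv6=[] ContainerID="485b7908bb7bf7ec" ...')
print(list(assignments([sample])))  # [('485b7908bb7bf7ec', ['192.168.24.198/26'])]
```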
May 17 00:22:19.762538 containerd[1467]: 2025-05-17 00:22:19.702 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.198/26] IPv6=[] ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" HandleID="k8s-pod-network.485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.763167 containerd[1467]: 2025-05-17 00:22:19.713 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"84bea557-3a73-4c30-b7d9-60dca7b8e6f7", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"calico-apiserver-7d8b46c577-g5mnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedae96f96ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:19.763167 containerd[1467]: 2025-05-17 00:22:19.715 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.198/32] ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.763167 containerd[1467]: 2025-05-17 00:22:19.715 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedae96f96ad ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.763167 containerd[1467]: 2025-05-17 00:22:19.744 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.763167 containerd[1467]: 2025-05-17 00:22:19.746 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"84bea557-3a73-4c30-b7d9-60dca7b8e6f7", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6", Pod:"calico-apiserver-7d8b46c577-g5mnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedae96f96ad", MAC:"d2:6a:4f:b8:ec:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:19.763167 containerd[1467]: 2025-05-17 00:22:19.757 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6" Namespace="calico-apiserver" Pod="calico-apiserver-7d8b46c577-g5mnw" WorkloadEndpoint="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:19.830253 systemd-networkd[1392]: cali057ef22c42e: Link UP May 17 00:22:19.831519 systemd-networkd[1392]: cali057ef22c42e: Gained carrier May 17 00:22:19.844789 containerd[1467]: time="2025-05-17T00:22:19.844568143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:19.844789 containerd[1467]: time="2025-05-17T00:22:19.844609419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:19.844789 containerd[1467]: time="2025-05-17T00:22:19.844620910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:19.844789 containerd[1467]: time="2025-05-17T00:22:19.844705602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:19.849109 systemd-networkd[1392]: calid1cdd338c99: Gained IPv6LL May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.616 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.671 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0 goldmane-78d55f7ddc- calico-system a8d2447a-ad8d-4842-8426-24362dceb355 1003 0 2025-05-17 00:21:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-233-222-141 goldmane-78d55f7ddc-rx28r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali057ef22c42e [] [] }} ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.672 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.755 [INFO][4720] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" HandleID="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.759 [INFO][4720] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" HandleID="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9730), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-141", "pod":"goldmane-78d55f7ddc-rx28r", "timestamp":"2025-05-17 00:22:19.755543461 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.759 [INFO][4720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.759 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
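The systemd-networkd "Gained IPv6LL" messages scattered through this section (cali2b7134b9925, cali7e05a77e5e6, cali7f4d210b56e, calid1cdd338c99) mean each cali* veth acquired an IPv6 link-local address. Under the classic EUI-64 derivation, that address follows directly from an interface MAC; a sketch of the derivation, using the endpoint MAC recorded for the coredns pod above, with the caveat that networkd may instead assign stable-privacy link-local addresses, so the computed value is only what EUI-64 would give:

```python
def eui64_link_local(mac: str) -> str:
    """Derive the EUI-64 IPv6 link-local address for a MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                              # flip the universal/local bit
    eui = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# MAC from the coredns-674b8bbfcf-hrlj7 WorkloadEndpoint above.
print(eui64_link_local("7e:48:d5:ef:a2:52"))  # fe80::7c48:d5ff:feef:a252
```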
May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.759 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.770 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.777 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.783 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.785 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.789 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.789 [INFO][4720] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.795 [INFO][4720] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.802 [INFO][4720] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.809 [INFO][4720] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.199/26] block=192.168.24.192/26 handle="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.809 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.199/26] handle="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" host="172-233-222-141" May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.809 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
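The kubelet dns.go:153 "Nameserver limits exceeded" warnings repeated throughout this log mean the node's resolv.conf lists more nameservers than kubelet will propagate to pods, so the extras are silently dropped; the applied line keeps exactly three (172.232.0.16 172.232.0.21 172.232.0.13), matching the classic glibc resolver limit. A quick node-side check for that condition; the limit of 3 here is inferred from the warning text, not read from kubelet configuration:

```python
MAX_NAMESERVERS = 3  # kubelet keeps at most this many; extras are omitted

def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
    with open(path) as f:
        servers = [parts[1]
                   for parts in (line.split() for line in f)
                   if len(parts) > 1 and parts[0] == "nameserver"]
    if len(servers) > MAX_NAMESERVERS:
        print(f"{len(servers)} nameservers; kubelet would keep "
              f"{servers[:MAX_NAMESERVERS]} and omit {servers[MAX_NAMESERVERS:]}")
    else:
        print(f"{len(servers)} nameservers; within the limit")

if __name__ == "__main__":
    check_resolv_conf()
```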
May 17 00:22:19.861974 containerd[1467]: 2025-05-17 00:22:19.809 [INFO][4720] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.199/26] IPv6=[] ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" HandleID="k8s-pod-network.7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.862991 containerd[1467]: 2025-05-17 00:22:19.816 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a8d2447a-ad8d-4842-8426-24362dceb355", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"goldmane-78d55f7ddc-rx28r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali057ef22c42e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:19.862991 containerd[1467]: 2025-05-17 00:22:19.817 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.199/32] ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.862991 containerd[1467]: 2025-05-17 00:22:19.817 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali057ef22c42e ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.862991 containerd[1467]: 2025-05-17 00:22:19.835 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.862991 containerd[1467]: 2025-05-17 00:22:19.836 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" 
WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a8d2447a-ad8d-4842-8426-24362dceb355", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a", Pod:"goldmane-78d55f7ddc-rx28r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali057ef22c42e", MAC:"4e:7c:0f:67:b5:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:19.862991 containerd[1467]: 2025-05-17 00:22:19.854 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a" Namespace="calico-system" Pod="goldmane-78d55f7ddc-rx28r" WorkloadEndpoint="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:19.883800 systemd[1]: Started cri-containerd-485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6.scope - libcontainer container 485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6. May 17 00:22:19.912687 containerd[1467]: time="2025-05-17T00:22:19.911823232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:19.912687 containerd[1467]: time="2025-05-17T00:22:19.911864088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:19.912687 containerd[1467]: time="2025-05-17T00:22:19.911876890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:19.912687 containerd[1467]: time="2025-05-17T00:22:19.911935428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:19.953439 systemd[1]: Started cri-containerd-7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a.scope - libcontainer container 7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a. 
May 17 00:22:20.156728 containerd[1467]: time="2025-05-17T00:22:20.156607368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d8b46c577-g5mnw,Uid:84bea557-3a73-4c30-b7d9-60dca7b8e6f7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6\"" May 17 00:22:20.186483 containerd[1467]: time="2025-05-17T00:22:20.186452342Z" level=info msg="CreateContainer within sandbox \"485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:20.206765 containerd[1467]: time="2025-05-17T00:22:20.206735309Z" level=info msg="CreateContainer within sandbox \"485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c188b5640b6070aee7c96842551726368aa96698daabd21a3f74944e8178c26\"" May 17 00:22:20.207833 containerd[1467]: time="2025-05-17T00:22:20.207785154Z" level=info msg="StartContainer for \"9c188b5640b6070aee7c96842551726368aa96698daabd21a3f74944e8178c26\"" May 17 00:22:20.231750 kernel: bpftool[4836]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:22:20.279008 containerd[1467]: time="2025-05-17T00:22:20.278601526Z" level=info msg="StopPodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\"" May 17 00:22:20.290233 systemd[1]: Started cri-containerd-9c188b5640b6070aee7c96842551726368aa96698daabd21a3f74944e8178c26.scope - libcontainer container 9c188b5640b6070aee7c96842551726368aa96698daabd21a3f74944e8178c26. May 17 00:22:20.401123 containerd[1467]: time="2025-05-17T00:22:20.401055829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-rx28r,Uid:a8d2447a-ad8d-4842-8426-24362dceb355,Namespace:calico-system,Attempt:1,} returns sandbox id \"7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a\"" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.414 [INFO][4869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.415 [INFO][4869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" iface="eth0" netns="/var/run/netns/cni-7e1824e9-076b-378a-daa3-d5e21e983e3e" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.416 [INFO][4869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" iface="eth0" netns="/var/run/netns/cni-7e1824e9-076b-378a-daa3-d5e21e983e3e" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.416 [INFO][4869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" iface="eth0" netns="/var/run/netns/cni-7e1824e9-076b-378a-daa3-d5e21e983e3e" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.417 [INFO][4869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.417 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.469 [INFO][4890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.469 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.469 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.476 [WARNING][4890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.476 [INFO][4890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.479 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:20.493173 containerd[1467]: 2025-05-17 00:22:20.485 [INFO][4869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:20.497686 containerd[1467]: time="2025-05-17T00:22:20.493966638Z" level=info msg="TearDown network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" successfully" May 17 00:22:20.497686 containerd[1467]: time="2025-05-17T00:22:20.493994152Z" level=info msg="StopPodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" returns successfully" May 17 00:22:20.498529 containerd[1467]: time="2025-05-17T00:22:20.498495037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrmvr,Uid:c739a616-a481-41f3-a04d-de803459e701,Namespace:calico-system,Attempt:1,}" May 17 00:22:20.498973 systemd[1]: run-netns-cni\x2d7e1824e9\x2d076b\x2d378a\x2ddaa3\x2dd5e21e983e3e.mount: Deactivated successfully. 
May 17 00:22:20.533488 kubelet[2505]: E0517 00:22:20.532728 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:20.534680 kubelet[2505]: I0517 00:22:20.534400 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:20.606565 containerd[1467]: time="2025-05-17T00:22:20.606496552Z" level=info msg="StartContainer for \"9c188b5640b6070aee7c96842551726368aa96698daabd21a3f74944e8178c26\" returns successfully" May 17 00:22:20.716043 systemd-networkd[1392]: cali8f102dc718d: Link UP May 17 00:22:20.718387 systemd-networkd[1392]: cali8f102dc718d: Gained carrier May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.592 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--141-k8s-csi--node--driver--hrmvr-eth0 csi-node-driver- calico-system c739a616-a481-41f3-a04d-de803459e701 1021 0 2025-05-17 00:21:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-233-222-141 csi-node-driver-hrmvr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f102dc718d [] [] }} ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.594 [INFO][4897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.648 [INFO][4916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" HandleID="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.648 [INFO][4916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" HandleID="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9990), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-141", "pod":"csi-node-driver-hrmvr", "timestamp":"2025-05-17 00:22:20.648539269 +0000 UTC"}, Hostname:"172-233-222-141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.648 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.649 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.649 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-141' May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.661 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.669 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.677 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.24.192/26 host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.679 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.681 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.192/26 host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.681 [INFO][4916] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.24.192/26 handle="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.683 [INFO][4916] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.690 [INFO][4916] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.24.192/26 handle="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.698 [INFO][4916] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.24.200/26] block=192.168.24.192/26 handle="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.698 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.200/26] handle="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" host="172-233-222-141" May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.698 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:20.748537 containerd[1467]: 2025-05-17 00:22:20.698 [INFO][4916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.200/26] IPv6=[] ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" HandleID="k8s-pod-network.ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.750574 containerd[1467]: 2025-05-17 00:22:20.707 [INFO][4897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-csi--node--driver--hrmvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c739a616-a481-41f3-a04d-de803459e701", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"", Pod:"csi-node-driver-hrmvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f102dc718d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:20.750574 containerd[1467]: 2025-05-17 00:22:20.708 [INFO][4897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.200/32] ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.750574 containerd[1467]: 2025-05-17 00:22:20.708 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f102dc718d ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.750574 containerd[1467]: 2025-05-17 00:22:20.719 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.750574 containerd[1467]: 2025-05-17 00:22:20.721 [INFO][4897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" 
Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-csi--node--driver--hrmvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c739a616-a481-41f3-a04d-de803459e701", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f", Pod:"csi-node-driver-hrmvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f102dc718d", MAC:"8a:9c:ba:6c:ef:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:20.750574 containerd[1467]: 2025-05-17 00:22:20.741 [INFO][4897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f" Namespace="calico-system" Pod="csi-node-driver-hrmvr" WorkloadEndpoint="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:20.804467 containerd[1467]: time="2025-05-17T00:22:20.803916073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:20.804467 containerd[1467]: time="2025-05-17T00:22:20.803997744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:20.804467 containerd[1467]: time="2025-05-17T00:22:20.804010396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:20.804467 containerd[1467]: time="2025-05-17T00:22:20.804104459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:20.851826 systemd[1]: Started cri-containerd-ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f.scope - libcontainer container ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f. 
May 17 00:22:20.930399 containerd[1467]: time="2025-05-17T00:22:20.930341095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hrmvr,Uid:c739a616-a481-41f3-a04d-de803459e701,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f\"" May 17 00:22:20.947579 systemd-networkd[1392]: vxlan.calico: Link UP May 17 00:22:20.947586 systemd-networkd[1392]: vxlan.calico: Gained carrier May 17 00:22:21.128065 systemd-networkd[1392]: caliedae96f96ad: Gained IPv6LL May 17 00:22:21.190958 systemd-networkd[1392]: cali057ef22c42e: Gained IPv6LL May 17 00:22:21.382970 containerd[1467]: time="2025-05-17T00:22:21.382427524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:21.384231 containerd[1467]: time="2025-05-17T00:22:21.383998007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:22:21.384887 containerd[1467]: time="2025-05-17T00:22:21.384744887Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:21.388684 containerd[1467]: time="2025-05-17T00:22:21.388625571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:21.391505 containerd[1467]: time="2025-05-17T00:22:21.391469945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.691891555s" May 17 00:22:21.391505 containerd[1467]: time="2025-05-17T00:22:21.391501689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:22:21.392880 containerd[1467]: time="2025-05-17T00:22:21.392719304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:21.431309 containerd[1467]: time="2025-05-17T00:22:21.427563829Z" level=info msg="CreateContainer within sandbox \"6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:22:21.467744 containerd[1467]: time="2025-05-17T00:22:21.467703970Z" level=info msg="CreateContainer within sandbox \"6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2b6c8dbe9e95f2749d752a79a296c6442833421db7670b980d78873797543623\"" May 17 00:22:21.468971 containerd[1467]: time="2025-05-17T00:22:21.468817771Z" level=info msg="StartContainer for \"2b6c8dbe9e95f2749d752a79a296c6442833421db7670b980d78873797543623\"" May 17 00:22:21.513096 containerd[1467]: time="2025-05-17T00:22:21.513048604Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:21.521344 containerd[1467]: time="2025-05-17T00:22:21.521286296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:21.521344 containerd[1467]: time="2025-05-17T00:22:21.521392950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:21.524505 kubelet[2505]: E0517 00:22:21.523376 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:21.524505 kubelet[2505]: E0517 00:22:21.523490 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:21.524505 kubelet[2505]: E0517 00:22:21.524406 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frntd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rx28r_calico-system(a8d2447a-ad8d-4842-8426-24362dceb355): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:21.524912 containerd[1467]: time="2025-05-17T00:22:21.523772171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:22:21.525351 systemd[1]: Started cri-containerd-2b6c8dbe9e95f2749d752a79a296c6442833421db7670b980d78873797543623.scope - libcontainer container 2b6c8dbe9e95f2749d752a79a296c6442833421db7670b980d78873797543623. 
May 17 00:22:21.528066 kubelet[2505]: E0517 00:22:21.525511 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:22:21.543461 kubelet[2505]: E0517 00:22:21.541857 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:22:21.566723 kubelet[2505]: I0517 00:22:21.566305 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d8b46c577-g5mnw" podStartSLOduration=24.566289274 podStartE2EDuration="24.566289274s" podCreationTimestamp="2025-05-17 00:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:21.552646791 +0000 UTC m=+40.372552790" watchObservedRunningTime="2025-05-17 00:22:21.566289274 +0000 UTC m=+40.386195273" May 17 00:22:21.678832 containerd[1467]: time="2025-05-17T00:22:21.678673951Z" level=info msg="StartContainer for \"2b6c8dbe9e95f2749d752a79a296c6442833421db7670b980d78873797543623\" returns successfully" May 17 00:22:22.534862 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL May 17 00:22:22.546973 kubelet[2505]: I0517 00:22:22.545845 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:22.556024 kubelet[2505]: I0517 00:22:22.555686 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58c8cb96d-rqnqs" podStartSLOduration=20.862879315 podStartE2EDuration="23.555652025s" podCreationTimestamp="2025-05-17 00:21:59 +0000 UTC" firstStartedPulling="2025-05-17 00:22:18.699237049 +0000 UTC m=+37.519143048" lastFinishedPulling="2025-05-17 00:22:21.392009768 +0000 UTC m=+40.211915758" observedRunningTime="2025-05-17 00:22:22.555330423 +0000 UTC m=+41.375236412" watchObservedRunningTime="2025-05-17 00:22:22.555652025 +0000 UTC m=+41.375558024" May 17 00:22:22.598851 systemd-networkd[1392]: cali8f102dc718d: Gained IPv6LL May 17 00:22:23.069924 containerd[1467]: time="2025-05-17T00:22:23.069836321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:23.070962 containerd[1467]: time="2025-05-17T00:22:23.070759869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:22:23.071700 containerd[1467]: 
time="2025-05-17T00:22:23.071451978Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:23.074229 containerd[1467]: time="2025-05-17T00:22:23.073521213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:23.074229 containerd[1467]: time="2025-05-17T00:22:23.074084235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.550290071s" May 17 00:22:23.074229 containerd[1467]: time="2025-05-17T00:22:23.074112919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:22:23.078693 containerd[1467]: time="2025-05-17T00:22:23.078644589Z" level=info msg="CreateContainer within sandbox \"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:22:23.101342 containerd[1467]: time="2025-05-17T00:22:23.100636857Z" level=info msg="CreateContainer within sandbox \"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"401dca3357dffdeb50662b0f4d85441ac91d601d14a8d791326f84f94bc9beb1\"" May 17 00:22:23.101165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount691900920.mount: Deactivated successfully. May 17 00:22:23.105550 containerd[1467]: time="2025-05-17T00:22:23.104844186Z" level=info msg="StartContainer for \"401dca3357dffdeb50662b0f4d85441ac91d601d14a8d791326f84f94bc9beb1\"" May 17 00:22:23.142814 systemd[1]: Started cri-containerd-401dca3357dffdeb50662b0f4d85441ac91d601d14a8d791326f84f94bc9beb1.scope - libcontainer container 401dca3357dffdeb50662b0f4d85441ac91d601d14a8d791326f84f94bc9beb1. 
May 17 00:22:23.177440 containerd[1467]: time="2025-05-17T00:22:23.177393044Z" level=info msg="StartContainer for \"401dca3357dffdeb50662b0f4d85441ac91d601d14a8d791326f84f94bc9beb1\" returns successfully" May 17 00:22:23.179592 containerd[1467]: time="2025-05-17T00:22:23.179548450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:22:23.549631 kubelet[2505]: I0517 00:22:23.549589 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:24.587514 containerd[1467]: time="2025-05-17T00:22:24.587285280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:24.589323 containerd[1467]: time="2025-05-17T00:22:24.588750133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:22:24.589498 containerd[1467]: time="2025-05-17T00:22:24.589471083Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:24.591909 containerd[1467]: time="2025-05-17T00:22:24.591694741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:24.592951 containerd[1467]: time="2025-05-17T00:22:24.592916894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.413303386s" May 17 00:22:24.592951 containerd[1467]: time="2025-05-17T00:22:24.592952188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:22:24.598323 containerd[1467]: time="2025-05-17T00:22:24.598279634Z" level=info msg="CreateContainer within sandbox \"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:22:24.611793 containerd[1467]: time="2025-05-17T00:22:24.611555014Z" level=info msg="CreateContainer within sandbox \"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"527354dcd42ec0a9a9074a9dee50983617ee276d5e6541eac69f48bf18ab522f\"" May 17 00:22:24.615249 containerd[1467]: time="2025-05-17T00:22:24.613772641Z" level=info msg="StartContainer for \"527354dcd42ec0a9a9074a9dee50983617ee276d5e6541eac69f48bf18ab522f\"" May 17 00:22:24.618857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356786754.mount: Deactivated successfully. May 17 00:22:24.654786 systemd[1]: Started cri-containerd-527354dcd42ec0a9a9074a9dee50983617ee276d5e6541eac69f48bf18ab522f.scope - libcontainer container 527354dcd42ec0a9a9074a9dee50983617ee276d5e6541eac69f48bf18ab522f. 
May 17 00:22:24.681411 containerd[1467]: time="2025-05-17T00:22:24.681283161Z" level=info msg="StartContainer for \"527354dcd42ec0a9a9074a9dee50983617ee276d5e6541eac69f48bf18ab522f\" returns successfully" May 17 00:22:25.276029 containerd[1467]: time="2025-05-17T00:22:25.275820704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:25.355339 kubelet[2505]: I0517 00:22:25.355224 2505 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:22:25.356770 kubelet[2505]: I0517 00:22:25.356723 2505 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:22:25.380004 containerd[1467]: time="2025-05-17T00:22:25.379950966Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:25.381172 containerd[1467]: time="2025-05-17T00:22:25.381127050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:25.381229 containerd[1467]: time="2025-05-17T00:22:25.381126010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:25.381429 kubelet[2505]: E0517 00:22:25.381397 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:25.381537 kubelet[2505]: E0517 00:22:25.381443 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:25.381652 kubelet[2505]: E0517 00:22:25.381581 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd29bf53547f4577a2cbbab64c8bad8c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:25.383541 containerd[1467]: time="2025-05-17T00:22:25.383521252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:25.480590 containerd[1467]: time="2025-05-17T00:22:25.480525904Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:25.481283 containerd[1467]: time="2025-05-17T00:22:25.481253504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:25.481392 containerd[1467]: time="2025-05-17T00:22:25.481352516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:25.481573 kubelet[2505]: E0517 00:22:25.481521 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:25.482045 kubelet[2505]: E0517 00:22:25.481583 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:25.482045 kubelet[2505]: E0517 00:22:25.481755 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:25.483333 kubelet[2505]: E0517 00:22:25.483267 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:22:25.567857 kubelet[2505]: I0517 00:22:25.567193 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hrmvr" podStartSLOduration=22.910630583 podStartE2EDuration="26.567178094s" podCreationTimestamp="2025-05-17 00:21:59 +0000 UTC" firstStartedPulling="2025-05-17 00:22:20.937826055 +0000 UTC m=+39.757732044" lastFinishedPulling="2025-05-17 00:22:24.594373556 +0000 UTC m=+43.414279555" observedRunningTime="2025-05-17 00:22:25.566790506 +0000 UTC m=+44.386696505" watchObservedRunningTime="2025-05-17 00:22:25.567178094 +0000 UTC m=+44.387084093" May 17 00:22:32.262479 kubelet[2505]: I0517 00:22:32.262426 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:36.275980 containerd[1467]: time="2025-05-17T00:22:36.275661301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:36.383468 containerd[1467]: time="2025-05-17T00:22:36.383405934Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:36.384368 containerd[1467]: time="2025-05-17T00:22:36.384337378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:36.384456 containerd[1467]: time="2025-05-17T00:22:36.384404724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:36.384588 kubelet[2505]: E0517 00:22:36.384533 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:36.385165 kubelet[2505]: E0517 00:22:36.384595 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from 
GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:36.385220 kubelet[2505]: E0517 00:22:36.384799 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frntd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rx28r_calico-system(a8d2447a-ad8d-4842-8426-24362dceb355): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:36.387067 kubelet[2505]: E0517 00:22:36.387009 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:22:39.279252 kubelet[2505]: E0517 00:22:39.279127 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:22:41.266461 containerd[1467]: time="2025-05-17T00:22:41.266399621Z" level=info msg="StopPodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\"" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.316 [WARNING][5258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-csi--node--driver--hrmvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c739a616-a481-41f3-a04d-de803459e701", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f", Pod:"csi-node-driver-hrmvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f102dc718d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.316 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.316 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" iface="eth0" netns="" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.316 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.316 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.342 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.342 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.342 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.348 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.348 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.349 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.354145 containerd[1467]: 2025-05-17 00:22:41.351 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.354567 containerd[1467]: time="2025-05-17T00:22:41.354174219Z" level=info msg="TearDown network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" successfully" May 17 00:22:41.354567 containerd[1467]: time="2025-05-17T00:22:41.354200272Z" level=info msg="StopPodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" returns successfully" May 17 00:22:41.354955 containerd[1467]: time="2025-05-17T00:22:41.354932900Z" level=info msg="RemovePodSandbox for \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\"" May 17 00:22:41.355026 containerd[1467]: time="2025-05-17T00:22:41.354960843Z" level=info msg="Forcibly stopping sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\"" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.390 [WARNING][5280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-csi--node--driver--hrmvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c739a616-a481-41f3-a04d-de803459e701", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"ea56462c3d641fccfc4b2d8c6454f404ca398aec944745ea5ed32114dee8968f", Pod:"csi-node-driver-hrmvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f102dc718d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.390 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.390 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" iface="eth0" netns="" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.390 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.390 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.416 [INFO][5287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.416 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.416 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.425 [WARNING][5287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.425 [INFO][5287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" HandleID="k8s-pod-network.1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" Workload="172--233--222--141-k8s-csi--node--driver--hrmvr-eth0" May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.426 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.433594 containerd[1467]: 2025-05-17 00:22:41.429 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f" May 17 00:22:41.434197 containerd[1467]: time="2025-05-17T00:22:41.433646739Z" level=info msg="TearDown network for sandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" successfully" May 17 00:22:41.438766 containerd[1467]: time="2025-05-17T00:22:41.438704063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:41.438925 containerd[1467]: time="2025-05-17T00:22:41.438789531Z" level=info msg="RemovePodSandbox \"1d03f9765c578dfcdb00b18655333bfd8f2c279ecd962701f008a6726858782f\" returns successfully" May 17 00:22:41.440046 containerd[1467]: time="2025-05-17T00:22:41.439647021Z" level=info msg="StopPodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\"" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.473 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0", GenerateName:"calico-kube-controllers-58c8cb96d-", Namespace:"calico-system", SelfLink:"", UID:"de4de8b9-3fd9-48eb-b6c1-3ea87c183557", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c8cb96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f", Pod:"calico-kube-controllers-58c8cb96d-rqnqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1cdd338c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.473 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.473 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" iface="eth0" netns="" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.473 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.473 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.494 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.495 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.495 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.500 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.500 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.502 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.505983 containerd[1467]: 2025-05-17 00:22:41.503 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.506410 containerd[1467]: time="2025-05-17T00:22:41.506132094Z" level=info msg="TearDown network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" successfully" May 17 00:22:41.506410 containerd[1467]: time="2025-05-17T00:22:41.506161547Z" level=info msg="StopPodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" returns successfully" May 17 00:22:41.507042 containerd[1467]: time="2025-05-17T00:22:41.507009456Z" level=info msg="RemovePodSandbox for \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\"" May 17 00:22:41.507083 containerd[1467]: time="2025-05-17T00:22:41.507048840Z" level=info msg="Forcibly stopping sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\"" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.539 [WARNING][5324] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0", GenerateName:"calico-kube-controllers-58c8cb96d-", Namespace:"calico-system", SelfLink:"", UID:"de4de8b9-3fd9-48eb-b6c1-3ea87c183557", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c8cb96d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"6ec47de3d6a78914ca04086098543442b993c48675e3c14ce657165c4d016a8f", Pod:"calico-kube-controllers-58c8cb96d-rqnqs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid1cdd338c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.539 [INFO][5324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.539 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" iface="eth0" netns="" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.539 [INFO][5324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.539 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.570 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.572 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.572 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.580 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.581 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" HandleID="k8s-pod-network.4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" Workload="172--233--222--141-k8s-calico--kube--controllers--58c8cb96d--rqnqs-eth0" May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.583 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.598888 containerd[1467]: 2025-05-17 00:22:41.587 [INFO][5324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9" May 17 00:22:41.600731 containerd[1467]: time="2025-05-17T00:22:41.599734328Z" level=info msg="TearDown network for sandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" successfully" May 17 00:22:41.605798 containerd[1467]: time="2025-05-17T00:22:41.605756043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:41.605798 containerd[1467]: time="2025-05-17T00:22:41.605816248Z" level=info msg="RemovePodSandbox \"4cefa83187a5e3c2e9f7e7d76f3a106409e6ac448397648cdd659c4b36e94fc9\" returns successfully" May 17 00:22:41.606572 containerd[1467]: time="2025-05-17T00:22:41.606530096Z" level=info msg="StopPodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\"" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.642 [WARNING][5347] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6cc853-64b3-4a6d-8418-b38799cbf9cb", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f", Pod:"calico-apiserver-7d8b46c577-4dr29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b7134b9925", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.642 [INFO][5347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.642 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" iface="eth0" netns="" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.642 [INFO][5347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.642 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.663 [INFO][5355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.663 [INFO][5355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.663 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.668 [WARNING][5355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.668 [INFO][5355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.669 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.674447 containerd[1467]: 2025-05-17 00:22:41.671 [INFO][5347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.675038 containerd[1467]: time="2025-05-17T00:22:41.674467855Z" level=info msg="TearDown network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" successfully" May 17 00:22:41.675038 containerd[1467]: time="2025-05-17T00:22:41.674509688Z" level=info msg="StopPodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" returns successfully" May 17 00:22:41.675474 containerd[1467]: time="2025-05-17T00:22:41.675438055Z" level=info msg="RemovePodSandbox for \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\"" May 17 00:22:41.675518 containerd[1467]: time="2025-05-17T00:22:41.675478369Z" level=info msg="Forcibly stopping sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\"" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.711 [WARNING][5370] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"0a6cc853-64b3-4a6d-8418-b38799cbf9cb", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"3fdb1d276a070a3a9c31219fc021f29cfa57976947fe9aaf279be40244cb818f", Pod:"calico-apiserver-7d8b46c577-4dr29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b7134b9925", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.712 [INFO][5370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.712 [INFO][5370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" iface="eth0" netns="" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.712 [INFO][5370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.712 [INFO][5370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.735 [INFO][5377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.735 [INFO][5377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.736 [INFO][5377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.741 [WARNING][5377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.741 [INFO][5377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" HandleID="k8s-pod-network.639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--4dr29-eth0" May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.742 [INFO][5377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.747795 containerd[1467]: 2025-05-17 00:22:41.745 [INFO][5370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d" May 17 00:22:41.748119 containerd[1467]: time="2025-05-17T00:22:41.747813249Z" level=info msg="TearDown network for sandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" successfully" May 17 00:22:41.751567 containerd[1467]: time="2025-05-17T00:22:41.751539909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:41.751708 containerd[1467]: time="2025-05-17T00:22:41.751594464Z" level=info msg="RemovePodSandbox \"639e8f36dc212a40de3ef273570b70281ae940ed088e387807681829f4b64d6d\" returns successfully" May 17 00:22:41.752363 containerd[1467]: time="2025-05-17T00:22:41.752286819Z" level=info msg="StopPodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\"" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.784 [WARNING][5391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f83297f-0f6d-448a-89e2-0744aceeab4a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9", Pod:"coredns-674b8bbfcf-hrlj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f4d210b56e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.784 [INFO][5391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.784 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" iface="eth0" netns="" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.784 [INFO][5391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.784 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.809 [INFO][5399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.810 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.810 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.815 [WARNING][5399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.815 [INFO][5399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.817 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.822309 containerd[1467]: 2025-05-17 00:22:41.819 [INFO][5391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.822869 containerd[1467]: time="2025-05-17T00:22:41.822352608Z" level=info msg="TearDown network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" successfully" May 17 00:22:41.822869 containerd[1467]: time="2025-05-17T00:22:41.822392861Z" level=info msg="StopPodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" returns successfully" May 17 00:22:41.823062 containerd[1467]: time="2025-05-17T00:22:41.823016820Z" level=info msg="RemovePodSandbox for \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\"" May 17 00:22:41.823062 containerd[1467]: time="2025-05-17T00:22:41.823044332Z" level=info msg="Forcibly stopping sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\"" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.864 [WARNING][5413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f83297f-0f6d-448a-89e2-0744aceeab4a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"1c26a5618d5b2c1e8ed9861b3dffae1ede0e5c3356c40c969a44b485a606e9d9", Pod:"coredns-674b8bbfcf-hrlj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f4d210b56e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.865 [INFO][5413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.865 [INFO][5413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" iface="eth0" netns="" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.865 [INFO][5413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.865 [INFO][5413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.891 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.891 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.891 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.897 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.897 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" HandleID="k8s-pod-network.e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--hrlj7-eth0" May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.898 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:41.907787 containerd[1467]: 2025-05-17 00:22:41.902 [INFO][5413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75" May 17 00:22:41.907787 containerd[1467]: time="2025-05-17T00:22:41.907770624Z" level=info msg="TearDown network for sandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" successfully" May 17 00:22:41.913869 containerd[1467]: time="2025-05-17T00:22:41.913582379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:41.913869 containerd[1467]: time="2025-05-17T00:22:41.913687029Z" level=info msg="RemovePodSandbox \"e1b0807666706de63c138fc241402f42078a38c6558788317ea87af44c5c6b75\" returns successfully" May 17 00:22:41.915827 containerd[1467]: time="2025-05-17T00:22:41.915788386Z" level=info msg="StopPodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\"" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.954 [WARNING][5440] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"84bea557-3a73-4c30-b7d9-60dca7b8e6f7", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6", Pod:"calico-apiserver-7d8b46c577-g5mnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedae96f96ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.954 [INFO][5440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.954 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" iface="eth0" netns="" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.954 [INFO][5440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.954 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.990 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.990 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.990 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.996 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.996 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:41.998 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.005058 containerd[1467]: 2025-05-17 00:22:42.002 [INFO][5440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.005453 containerd[1467]: time="2025-05-17T00:22:42.005081264Z" level=info msg="TearDown network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" successfully" May 17 00:22:42.005453 containerd[1467]: time="2025-05-17T00:22:42.005113657Z" level=info msg="StopPodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" returns successfully" May 17 00:22:42.006098 containerd[1467]: time="2025-05-17T00:22:42.005630814Z" level=info msg="RemovePodSandbox for \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\"" May 17 00:22:42.006098 containerd[1467]: time="2025-05-17T00:22:42.005689800Z" level=info msg="Forcibly stopping sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\"" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.041 [WARNING][5463] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0", GenerateName:"calico-apiserver-7d8b46c577-", Namespace:"calico-apiserver", SelfLink:"", UID:"84bea557-3a73-4c30-b7d9-60dca7b8e6f7", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d8b46c577", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"485b7908bb7bf7ecb8ae39c834bb846d4bf88a33d3102f31d02e9a41b1dcc1e6", Pod:"calico-apiserver-7d8b46c577-g5mnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedae96f96ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.041 [INFO][5463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.041 [INFO][5463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" iface="eth0" netns="" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.041 [INFO][5463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.041 [INFO][5463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.065 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.065 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.066 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.070 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.070 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" HandleID="k8s-pod-network.43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" Workload="172--233--222--141-k8s-calico--apiserver--7d8b46c577--g5mnw-eth0" May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.072 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.077537 containerd[1467]: 2025-05-17 00:22:42.074 [INFO][5463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11" May 17 00:22:42.078021 containerd[1467]: time="2025-05-17T00:22:42.077563577Z" level=info msg="TearDown network for sandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" successfully" May 17 00:22:42.084748 containerd[1467]: time="2025-05-17T00:22:42.084630643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:42.085000 containerd[1467]: time="2025-05-17T00:22:42.084899558Z" level=info msg="RemovePodSandbox \"43eef4c060ff32bad5161051508957952d582997cced28b187c4dd3d97201d11\" returns successfully" May 17 00:22:42.085843 containerd[1467]: time="2025-05-17T00:22:42.085734355Z" level=info msg="StopPodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\"" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.116 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a8d2447a-ad8d-4842-8426-24362dceb355", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a", Pod:"goldmane-78d55f7ddc-rx28r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali057ef22c42e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.116 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.116 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" iface="eth0" netns="" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.116 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.116 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.138 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.138 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.139 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.143 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.143 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.145 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.149872 containerd[1467]: 2025-05-17 00:22:42.147 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.149872 containerd[1467]: time="2025-05-17T00:22:42.149762834Z" level=info msg="TearDown network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" successfully" May 17 00:22:42.149872 containerd[1467]: time="2025-05-17T00:22:42.149780346Z" level=info msg="StopPodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" returns successfully" May 17 00:22:42.150505 containerd[1467]: time="2025-05-17T00:22:42.150463090Z" level=info msg="RemovePodSandbox for \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\"" May 17 00:22:42.150535 containerd[1467]: time="2025-05-17T00:22:42.150520225Z" level=info msg="Forcibly stopping sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\"" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.184 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"a8d2447a-ad8d-4842-8426-24362dceb355", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"7be587148a377450037f0d3ab4ba64e4210d4682dbecd6737ec420e32a6dc15a", Pod:"goldmane-78d55f7ddc-rx28r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali057ef22c42e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.184 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.184 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" iface="eth0" netns="" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.184 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.184 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.207 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.207 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.207 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.212 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.212 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" HandleID="k8s-pod-network.47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" Workload="172--233--222--141-k8s-goldmane--78d55f7ddc--rx28r-eth0" May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.214 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.219453 containerd[1467]: 2025-05-17 00:22:42.216 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb" May 17 00:22:42.221770 containerd[1467]: time="2025-05-17T00:22:42.219889669Z" level=info msg="TearDown network for sandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" successfully" May 17 00:22:42.224002 containerd[1467]: time="2025-05-17T00:22:42.223979079Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:42.224142 containerd[1467]: time="2025-05-17T00:22:42.224127422Z" level=info msg="RemovePodSandbox \"47089f3127aa7b08fd29058bf9f5191880960a749de8fd3f0e52b48e2fd142fb\" returns successfully" May 17 00:22:42.224850 containerd[1467]: time="2025-05-17T00:22:42.224813927Z" level=info msg="StopPodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\"" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.263 [WARNING][5529] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" WorkloadEndpoint="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.263 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.263 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" iface="eth0" netns="" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.263 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.263 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.286 [INFO][5536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.286 [INFO][5536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.286 [INFO][5536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.291 [WARNING][5536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.291 [INFO][5536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.293 [INFO][5536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.298344 containerd[1467]: 2025-05-17 00:22:42.296 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.299878 containerd[1467]: time="2025-05-17T00:22:42.298467779Z" level=info msg="TearDown network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" successfully" May 17 00:22:42.299878 containerd[1467]: time="2025-05-17T00:22:42.298490971Z" level=info msg="StopPodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" returns successfully" May 17 00:22:42.299878 containerd[1467]: time="2025-05-17T00:22:42.299102158Z" level=info msg="RemovePodSandbox for \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\"" May 17 00:22:42.299878 containerd[1467]: time="2025-05-17T00:22:42.299149752Z" level=info msg="Forcibly stopping sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\"" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.330 [WARNING][5550] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" WorkloadEndpoint="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.330 [INFO][5550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.330 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" iface="eth0" netns="" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.330 [INFO][5550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.330 [INFO][5550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.353 [INFO][5557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.353 [INFO][5557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.353 [INFO][5557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.358 [WARNING][5557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.358 [INFO][5557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" HandleID="k8s-pod-network.9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" Workload="172--233--222--141-k8s-whisker--6c76d77cb8--5vbxx-eth0" May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.359 [INFO][5557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.364294 containerd[1467]: 2025-05-17 00:22:42.362 [INFO][5550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b" May 17 00:22:42.364649 containerd[1467]: time="2025-05-17T00:22:42.364337809Z" level=info msg="TearDown network for sandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" successfully" May 17 00:22:42.368282 containerd[1467]: time="2025-05-17T00:22:42.368242731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:42.368363 containerd[1467]: time="2025-05-17T00:22:42.368305817Z" level=info msg="RemovePodSandbox \"9f1fab09410344d6062b2d821415ac6195d1eb4d81e326f0d00145583b2bd36b\" returns successfully" May 17 00:22:42.368787 containerd[1467]: time="2025-05-17T00:22:42.368762820Z" level=info msg="StopPodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\"" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.403 [WARNING][5571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48992ca7-0880-469a-be33-3fed00473f03", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f", Pod:"coredns-674b8bbfcf-jv4r6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e05a77e5e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.403 [INFO][5571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.403 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" iface="eth0" netns="" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.403 [INFO][5571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.403 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.426 [INFO][5580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.426 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.426 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.431 [WARNING][5580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.431 [INFO][5580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.432 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.437980 containerd[1467]: 2025-05-17 00:22:42.435 [INFO][5571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.437980 containerd[1467]: time="2025-05-17T00:22:42.437815195Z" level=info msg="TearDown network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" successfully" May 17 00:22:42.437980 containerd[1467]: time="2025-05-17T00:22:42.437845468Z" level=info msg="StopPodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" returns successfully" May 17 00:22:42.438498 containerd[1467]: time="2025-05-17T00:22:42.438416761Z" level=info msg="RemovePodSandbox for \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\"" May 17 00:22:42.438498 containerd[1467]: time="2025-05-17T00:22:42.438461535Z" level=info msg="Forcibly stopping sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\"" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.471 [WARNING][5594] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"48992ca7-0880-469a-be33-3fed00473f03", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-141", ContainerID:"34f84c7eb21ec5eed59cea23864466c2bbb2b833c82b15ae1fc43d795227575f", Pod:"coredns-674b8bbfcf-jv4r6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e05a77e5e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.471 [INFO][5594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.471 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" iface="eth0" netns="" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.471 [INFO][5594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.471 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.496 [INFO][5601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.497 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.497 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.502 [WARNING][5601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.502 [INFO][5601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" HandleID="k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" Workload="172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0" May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.504 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:42.508742 containerd[1467]: 2025-05-17 00:22:42.506 [INFO][5594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050" May 17 00:22:42.508742 containerd[1467]: time="2025-05-17T00:22:42.508636875Z" level=info msg="TearDown network for sandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" successfully" May 17 00:22:42.514143 containerd[1467]: time="2025-05-17T00:22:42.514116873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:42.514206 containerd[1467]: time="2025-05-17T00:22:42.514176488Z" level=info msg="RemovePodSandbox \"cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050\" returns successfully" May 17 00:22:46.514303 systemd[1]: run-containerd-runc-k8s.io-d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7-runc.AyePek.mount: Deactivated successfully. 
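Annotation: the teardown sequences above all walk the same Calico CNI delete path: acquire the host-wide IPAM lock, try to release the allocation by its handle ID, tolerate the "Asked to release address but it doesn't exist. Ignoring" case, release by workload ID, and drop the lock. The forced "Forcibly stopping sandbox" pass hits the doesn't-exist branch precisely because the first StopPodSandbox already freed the address, so the warning is benign. Below is a minimal, self-contained sketch of that ordering; releaseByHandle/releaseByWorkload are hypothetical in-memory stand-ins, not Calico's real API (the real logic lives in the sources the log names, ipam/ipam_plugin.go and cni-plugin/k8s.go).

package main

import (
	"fmt"
	"sync"
)

// Hypothetical in-memory stand-ins for Calico's IPAM store.
var (
	hostIPAMLock sync.Mutex              // the "host-wide IPAM lock" in the log
	byHandle     = map[string]string{}   // handleID -> allocated IP
	byWorkload   = map[string][]string{} // workloadID -> allocated IPs
)

func releaseByHandle(handleID string) error {
	if _, ok := byHandle[handleID]; !ok {
		return fmt.Errorf("no allocation for handle %q", handleID)
	}
	delete(byHandle, handleID)
	return nil
}

func releaseByWorkload(workloadID string) { delete(byWorkload, workloadID) }

// releaseAddresses mirrors the ordering visible in the log entries above.
func releaseAddresses(handleID, workloadID string) {
	hostIPAMLock.Lock() // "About to acquire host-wide IPAM lock."
	defer hostIPAMLock.Unlock()

	if err := releaseByHandle(handleID); err != nil {
		// "Asked to release address but it doesn't exist. Ignoring"
		fmt.Println("WARNING:", err)
	}
	releaseByWorkload(workloadID) // "Releasing address using workloadID"
}

func main() {
	releaseAddresses(
		"k8s-pod-network.cb4c7b8838b851a761539def7bb60c11706b88facd03f3b715e86c5bfc4e7050",
		"172--233--222--141-k8s-coredns--674b8bbfcf--jv4r6-eth0",
	)
}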
May 17 00:22:48.275362 kubelet[2505]: E0517 00:22:48.275297 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:22:52.275121 kubelet[2505]: E0517 00:22:52.274762 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:22:52.276946 containerd[1467]: time="2025-05-17T00:22:52.276463462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:52.377036 containerd[1467]: time="2025-05-17T00:22:52.376956149Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:52.378301 containerd[1467]: time="2025-05-17T00:22:52.378154608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:52.378301 containerd[1467]: time="2025-05-17T00:22:52.378216132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:52.378413 kubelet[2505]: E0517 00:22:52.378370 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:52.378523 kubelet[2505]: E0517 00:22:52.378424 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:52.378595 kubelet[2505]: E0517 00:22:52.378554 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd29bf53547f4577a2cbbab64c8bad8c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:52.380715 containerd[1467]: time="2025-05-17T00:22:52.380654215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:52.491151 containerd[1467]: time="2025-05-17T00:22:52.491047161Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:52.492212 containerd[1467]: time="2025-05-17T00:22:52.492160507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:52.493078 containerd[1467]: time="2025-05-17T00:22:52.492227731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:52.493714 kubelet[2505]: E0517 00:22:52.492429 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:52.493714 kubelet[2505]: E0517 00:22:52.492489 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:52.493714 kubelet[2505]: E0517 00:22:52.492656 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:52.494004 kubelet[2505]: E0517 00:22:52.493951 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:22:58.273960 kubelet[2505]: E0517 00:22:58.273895 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:23:00.963442 kubelet[2505]: I0517 00:23:00.963366 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:23:02.276464 containerd[1467]: time="2025-05-17T00:23:02.275790976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:23:02.400985 containerd[1467]: time="2025-05-17T00:23:02.400919201Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:02.401823 containerd[1467]: time="2025-05-17T00:23:02.401777507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:02.401915 containerd[1467]: time="2025-05-17T00:23:02.401871692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:23:02.402717 kubelet[2505]: E0517 00:23:02.402096 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:02.402717 kubelet[2505]: E0517 00:23:02.402161 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:02.402717 kubelet[2505]: E0517 00:23:02.402316 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frntd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rx28r_calico-system(a8d2447a-ad8d-4842-8426-24362dceb355): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:02.403821 kubelet[2505]: E0517 00:23:02.403742 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:23:03.279859 kubelet[2505]: E0517 00:23:03.278770 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:23:10.273998 kubelet[2505]: E0517 00:23:10.273950 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:23:15.277863 kubelet[2505]: E0517 00:23:15.276497 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:23:16.514511 systemd[1]: run-containerd-runc-k8s.io-d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7-runc.mLrhow.mount: Deactivated successfully. 
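Annotation: every pull failure in this log bottoms out in the same HTTP exchange: before transferring any image data ("bytes read=86"), containerd asks https://ghcr.io/token for an anonymous pull token scoped to the repository, and the registry answers 403 Forbidden. The request can be replayed from any host to confirm the 403 is registry-side rather than a node networking or DNS problem; a minimal sketch, with the goldmane URL copied verbatim from the log:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The anonymous token request containerd makes before pulling
	// ghcr.io/flatcar/calico/goldmane:v3.30.0 (URL taken from the log).
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// The log shows "403 Forbidden" here; anything other than 200 means
	// anonymous pulls of this repository are refused by the registry.
	fmt.Println("status:", resp.Status)
}

A 403 (rather than a 401 challenge) from the token endpoint suggests the registry is refusing anonymous access to these flatcar/calico repositories outright, which is why kubelet's retries never succeed.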
May 17 00:23:18.275856 kubelet[2505]: E0517 00:23:18.275602 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:23:19.274967 kubelet[2505]: E0517 00:23:19.274197 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:23:26.274952 kubelet[2505]: E0517 00:23:26.274889 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:23:29.275767 kubelet[2505]: E0517 00:23:29.275553 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:23:31.274997 kubelet[2505]: E0517 00:23:31.273942 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:23:32.275249 kubelet[2505]: E0517 00:23:32.275163 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": 
ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:23:32.323039 systemd[1]: run-containerd-runc-k8s.io-2b6c8dbe9e95f2749d752a79a296c6442833421db7670b980d78873797543623-runc.CSVStX.mount: Deactivated successfully. May 17 00:23:39.274836 kubelet[2505]: E0517 00:23:39.274500 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:23:40.274730 kubelet[2505]: E0517 00:23:40.274659 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355" May 17 00:23:42.857056 systemd[1]: Started sshd@8-172.233.222.141:22-139.178.89.65:56822.service - OpenSSH per-connection server daemon (139.178.89.65:56822). May 17 00:23:43.198786 sshd[5740]: Accepted publickey for core from 139.178.89.65 port 56822 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:43.200819 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:43.205811 systemd-logind[1450]: New session 8 of user core. May 17 00:23:43.211781 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:23:43.529832 sshd[5740]: pam_unix(sshd:session): session closed for user core May 17 00:23:43.532864 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. May 17 00:23:43.536184 systemd[1]: sshd@8-172.233.222.141:22-139.178.89.65:56822.service: Deactivated successfully. May 17 00:23:43.540251 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:23:43.543086 systemd-logind[1450]: Removed session 8. 
May 17 00:23:45.276910 containerd[1467]: time="2025-05-17T00:23:45.276379405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:23:45.384071 containerd[1467]: time="2025-05-17T00:23:45.384019741Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:45.387192 containerd[1467]: time="2025-05-17T00:23:45.387151381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:45.387192 containerd[1467]: time="2025-05-17T00:23:45.387217714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:23:45.387739 kubelet[2505]: E0517 00:23:45.387545 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:45.387739 kubelet[2505]: E0517 00:23:45.387613 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:45.389176 kubelet[2505]: E0517 00:23:45.389124 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd29bf53547f4577a2cbbab64c8bad8c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:45.391690 containerd[1467]: time="2025-05-17T00:23:45.391031684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:23:45.493860 containerd[1467]: time="2025-05-17T00:23:45.493751528Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:45.494998 containerd[1467]: time="2025-05-17T00:23:45.494841439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:45.494998 containerd[1467]: time="2025-05-17T00:23:45.494943482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:23:45.495764 kubelet[2505]: E0517 00:23:45.495442 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:45.495764 kubelet[2505]: E0517 00:23:45.495523 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:45.496169 kubelet[2505]: E0517 00:23:45.495946 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nscn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79c6b7464b-t786w_calico-system(fbe987ff-c3c8-4769-8d91-b50b803b038b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:45.497146 kubelet[2505]: E0517 00:23:45.497108 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b" May 17 00:23:48.587817 systemd[1]: Started sshd@9-172.233.222.141:22-139.178.89.65:47620.service - OpenSSH per-connection server daemon (139.178.89.65:47620). May 17 00:23:48.913381 sshd[5775]: Accepted publickey for core from 139.178.89.65 port 47620 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:48.915269 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:48.920020 systemd-logind[1450]: New session 9 of user core. May 17 00:23:48.928795 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:23:49.254193 sshd[5775]: pam_unix(sshd:session): session closed for user core May 17 00:23:49.262375 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. May 17 00:23:49.263253 systemd[1]: sshd@9-172.233.222.141:22-139.178.89.65:47620.service: Deactivated successfully. May 17 00:23:49.265853 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:23:49.272310 systemd-logind[1450]: Removed session 9. May 17 00:23:49.318069 systemd[1]: Started sshd@10-172.233.222.141:22-139.178.89.65:47622.service - OpenSSH per-connection server daemon (139.178.89.65:47622). May 17 00:23:49.638380 sshd[5789]: Accepted publickey for core from 139.178.89.65 port 47622 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:49.639480 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:49.643419 systemd-logind[1450]: New session 10 of user core. May 17 00:23:49.649810 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:23:49.955878 sshd[5789]: pam_unix(sshd:session): session closed for user core May 17 00:23:49.958914 systemd[1]: sshd@10-172.233.222.141:22-139.178.89.65:47622.service: Deactivated successfully. May 17 00:23:49.961070 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:23:49.962577 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. May 17 00:23:49.964251 systemd-logind[1450]: Removed session 10. May 17 00:23:50.017291 systemd[1]: Started sshd@11-172.233.222.141:22-139.178.89.65:47624.service - OpenSSH per-connection server daemon (139.178.89.65:47624). May 17 00:23:50.354386 sshd[5802]: Accepted publickey for core from 139.178.89.65 port 47624 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:50.355174 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:50.359324 systemd-logind[1450]: New session 11 of user core. May 17 00:23:50.362777 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:23:50.652435 sshd[5802]: pam_unix(sshd:session): session closed for user core May 17 00:23:50.656570 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. 
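Annotation: between the actual pull attempts, kubelet only reports ImagePullBackOff: after each ErrImagePull it waits before retrying, with the delay doubling from 10 seconds up to a five-minute cap per Kubernetes' documented image-pull backoff. A small sketch of that schedule (an illustration, not kubelet's code; the backoff is computed internally and is not tunable per pod):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Documented kubelet image-pull backoff: double after each failure,
	// capped at 5 minutes.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: ImagePullBackOff, next pull in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Once the cap is reached the node keeps retrying every five minutes indefinitely, so the 403s above will recur until the registry-side permission problem is fixed.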
May 17 00:23:50.657348 systemd[1]: sshd@11-172.233.222.141:22-139.178.89.65:47624.service: Deactivated successfully.
May 17 00:23:50.659265 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:23:50.660445 systemd-logind[1450]: Removed session 11.
May 17 00:23:55.275064 containerd[1467]: time="2025-05-17T00:23:55.274794881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:23:55.381004 containerd[1467]: time="2025-05-17T00:23:55.380956768Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 17 00:23:55.381806 containerd[1467]: time="2025-05-17T00:23:55.381767938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 17 00:23:55.381918 containerd[1467]: time="2025-05-17T00:23:55.381837960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 17 00:23:55.381986 kubelet[2505]: E0517 00:23:55.381945 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:23:55.381986 kubelet[2505]: E0517 00:23:55.381979 2505 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:23:55.382291 kubelet[2505]: E0517 00:23:55.382071 2505 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-frntd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-rx28r_calico-system(a8d2447a-ad8d-4842-8426-24362dceb355): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 17 00:23:55.383430 kubelet[2505]: E0517 00:23:55.383397 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355"
May 17 00:23:55.717084 systemd[1]: Started sshd@12-172.233.222.141:22-139.178.89.65:47638.service - OpenSSH per-connection server daemon (139.178.89.65:47638).
May 17 00:23:56.047111 sshd[5831]: Accepted publickey for core from 139.178.89.65 port 47638 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:23:56.048457 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:56.052867 systemd-logind[1450]: New session 12 of user core.
May 17 00:23:56.055804 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:23:56.352633 sshd[5831]: pam_unix(sshd:session): session closed for user core
May 17 00:23:56.356964 systemd[1]: sshd@12-172.233.222.141:22-139.178.89.65:47638.service: Deactivated successfully.
May 17 00:23:56.359300 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:23:56.360389 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
May 17 00:23:56.361509 systemd-logind[1450]: Removed session 12.
May 17 00:23:57.275459 kubelet[2505]: E0517 00:23:57.275383 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b"
May 17 00:23:59.274319 kubelet[2505]: E0517 00:23:59.274045 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 17 00:24:01.415450 systemd[1]: Started sshd@13-172.233.222.141:22-139.178.89.65:60928.service - OpenSSH per-connection server daemon (139.178.89.65:60928).
May 17 00:24:01.754864 sshd[5851]: Accepted publickey for core from 139.178.89.65 port 60928 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:01.756444 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:01.761332 systemd-logind[1450]: New session 13 of user core.
May 17 00:24:01.765832 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:24:02.059396 sshd[5851]: pam_unix(sshd:session): session closed for user core
May 17 00:24:02.064365 systemd[1]: sshd@13-172.233.222.141:22-139.178.89.65:60928.service: Deactivated successfully.
May 17 00:24:02.067107 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:24:02.068507 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
May 17 00:24:02.069593 systemd-logind[1450]: Removed session 13.
May 17 00:24:06.275686 kubelet[2505]: E0517 00:24:06.274074 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 17 00:24:07.121456 systemd[1]: Started sshd@14-172.233.222.141:22-139.178.89.65:38796.service - OpenSSH per-connection server daemon (139.178.89.65:38796).
May 17 00:24:07.460762 sshd[5883]: Accepted publickey for core from 139.178.89.65 port 38796 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:07.462598 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:07.468925 systemd-logind[1450]: New session 14 of user core.
May 17 00:24:07.471987 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:24:07.770124 sshd[5883]: pam_unix(sshd:session): session closed for user core
May 17 00:24:07.777145 systemd[1]: sshd@14-172.233.222.141:22-139.178.89.65:38796.service: Deactivated successfully.
May 17 00:24:07.781593 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:24:07.787314 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
May 17 00:24:07.788805 systemd-logind[1450]: Removed session 14.
May 17 00:24:07.834896 systemd[1]: Started sshd@15-172.233.222.141:22-139.178.89.65:38798.service - OpenSSH per-connection server daemon (139.178.89.65:38798).
May 17 00:24:08.165293 sshd[5896]: Accepted publickey for core from 139.178.89.65 port 38798 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:08.166266 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:08.170625 systemd-logind[1450]: New session 15 of user core.
May 17 00:24:08.174783 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:24:08.595879 sshd[5896]: pam_unix(sshd:session): session closed for user core
May 17 00:24:08.599876 systemd[1]: sshd@15-172.233.222.141:22-139.178.89.65:38798.service: Deactivated successfully.
May 17 00:24:08.602649 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:24:08.603719 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit.
May 17 00:24:08.604873 systemd-logind[1450]: Removed session 15.
May 17 00:24:08.658210 systemd[1]: Started sshd@16-172.233.222.141:22-139.178.89.65:38800.service - OpenSSH per-connection server daemon (139.178.89.65:38800).
May 17 00:24:08.981908 sshd[5906]: Accepted publickey for core from 139.178.89.65 port 38800 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:08.983734 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:08.989941 systemd-logind[1450]: New session 16 of user core.
May 17 00:24:08.998817 systemd[1]: Started session-16.scope - Session 16 of User core.
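The two dns.go:153 entries are kubelet warning that the node's resolv.conf lists more nameservers than the resolver limit of three (glibc's MAXNS), so the extras are dropped and only the three addresses shown are applied. A hypothetical resolv.conf illustrating the condition; the first three addresses are taken from the log, while the fourth is an invented example of an entry that would be omitted:

    # /etc/resolv.conf (hypothetical layout reconstructing the warning)
    # glibc honors at most 3 nameserver entries; kubelet trims the rest and
    # logs "Nameserver limits exceeded" with the applied three-address line.
    nameserver 172.232.0.16
    nameserver 172.232.0.21
    nameserver 172.232.0.13
    nameserver 192.0.2.53    # example fourth entry: beyond the limit, dropped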
May 17 00:24:09.278487 kubelet[2505]: E0517 00:24:09.278165 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355"
May 17 00:24:09.885181 sshd[5906]: pam_unix(sshd:session): session closed for user core
May 17 00:24:09.890290 systemd[1]: sshd@16-172.233.222.141:22-139.178.89.65:38800.service: Deactivated successfully.
May 17 00:24:09.892568 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:24:09.893398 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit.
May 17 00:24:09.894656 systemd-logind[1450]: Removed session 16.
May 17 00:24:09.951874 systemd[1]: Started sshd@17-172.233.222.141:22-139.178.89.65:38812.service - OpenSSH per-connection server daemon (139.178.89.65:38812).
May 17 00:24:10.284775 sshd[5924]: Accepted publickey for core from 139.178.89.65 port 38812 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:10.286775 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:10.292110 systemd-logind[1450]: New session 17 of user core.
May 17 00:24:10.295782 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:24:10.679418 sshd[5924]: pam_unix(sshd:session): session closed for user core
May 17 00:24:10.683860 systemd[1]: sshd@17-172.233.222.141:22-139.178.89.65:38812.service: Deactivated successfully.
May 17 00:24:10.686232 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:24:10.687126 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
May 17 00:24:10.688321 systemd-logind[1450]: Removed session 17.
May 17 00:24:10.736336 systemd[1]: Started sshd@18-172.233.222.141:22-139.178.89.65:38816.service - OpenSSH per-connection server daemon (139.178.89.65:38816).
May 17 00:24:11.063468 sshd[5935]: Accepted publickey for core from 139.178.89.65 port 38816 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:11.065042 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:11.069632 systemd-logind[1450]: New session 18 of user core.
May 17 00:24:11.074810 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:24:11.359041 sshd[5935]: pam_unix(sshd:session): session closed for user core
May 17 00:24:11.363726 systemd[1]: sshd@18-172.233.222.141:22-139.178.89.65:38816.service: Deactivated successfully.
May 17 00:24:11.364038 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
May 17 00:24:11.366230 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:24:11.368076 systemd-logind[1450]: Removed session 18.
May 17 00:24:12.277036 kubelet[2505]: E0517 00:24:12.276729 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b"
May 17 00:24:16.425108 systemd[1]: Started sshd@19-172.233.222.141:22-139.178.89.65:38820.service - OpenSSH per-connection server daemon (139.178.89.65:38820).
May 17 00:24:16.520584 systemd[1]: run-containerd-runc-k8s.io-d294241c660b821034eb13dc9f70ecf4a7f396be5210ecfec79990e15934e6d7-runc.kHk5to.mount: Deactivated successfully.
May 17 00:24:16.749891 sshd[5970]: Accepted publickey for core from 139.178.89.65 port 38820 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:16.751419 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:16.755792 systemd-logind[1450]: New session 19 of user core.
May 17 00:24:16.762790 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:24:17.084300 sshd[5970]: pam_unix(sshd:session): session closed for user core
May 17 00:24:17.088202 systemd[1]: sshd@19-172.233.222.141:22-139.178.89.65:38820.service: Deactivated successfully.
May 17 00:24:17.091460 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:24:17.093906 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
May 17 00:24:17.095449 systemd-logind[1450]: Removed session 19.
May 17 00:24:20.275108 kubelet[2505]: E0517 00:24:20.275015 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-rx28r" podUID="a8d2447a-ad8d-4842-8426-24362dceb355"
May 17 00:24:22.144600 systemd[1]: Started sshd@20-172.233.222.141:22-139.178.89.65:36456.service - OpenSSH per-connection server daemon (139.178.89.65:36456).
May 17 00:24:22.475608 sshd[6006]: Accepted publickey for core from 139.178.89.65 port 36456 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434
May 17 00:24:22.476213 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:24:22.480756 systemd-logind[1450]: New session 20 of user core.
May 17 00:24:22.486784 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:24:22.767506 sshd[6006]: pam_unix(sshd:session): session closed for user core
May 17 00:24:22.771914 systemd[1]: sshd@20-172.233.222.141:22-139.178.89.65:36456.service: Deactivated successfully.
May 17 00:24:22.774586 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:24:22.775290 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
May 17 00:24:22.776162 systemd-logind[1450]: Removed session 20.
May 17 00:24:23.276237 kubelet[2505]: E0517 00:24:23.276189 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-79c6b7464b-t786w" podUID="fbe987ff-c3c8-4769-8d91-b50b803b038b"
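Once a pull has failed, the pods sit in ImagePullBackOff and the "Back-off pulling image" entries recur as kubelet retries on an exponential schedule; assuming kubelet's default image-pull backoff of a 10 s initial delay, doubled per failure and capped at 300 s, the retry spacing would look like the sketch below (the function name and parameters are illustrative, not kubelet API):

    # Sketch of an exponential image-pull backoff schedule, assuming
    # kubelet defaults: 10 s initial delay, doubling per failure, 300 s cap.
    def backoff_delays(initial: int = 10, cap: int = 300, attempts: int = 6):
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_delays()))  # [10, 20, 40, 80, 160, 300] seconds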