May 17 00:29:43.839856 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:29:43.839873 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:29:43.839880 kernel: BIOS-provided physical RAM map:
May 17 00:29:43.839885 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 17 00:29:43.839889 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 17 00:29:43.839896 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:29:43.839901 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 17 00:29:43.839906 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 17 00:29:43.839911 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:29:43.839915 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:29:43.839920 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:29:43.839924 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:29:43.839929 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 17 00:29:43.839936 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:29:43.839941 kernel: NX (Execute Disable) protection: active
May 17 00:29:43.839946 kernel: APIC: Static calls initialized
May 17 00:29:43.839951 kernel: SMBIOS 2.8 present.
May 17 00:29:43.839955 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 17 00:29:43.839960 kernel: Hypervisor detected: KVM
May 17 00:29:43.839967 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:29:43.839972 kernel: kvm-clock: using sched offset of 4091612550 cycles
May 17 00:29:43.839977 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:29:43.839982 kernel: tsc: Detected 2000.000 MHz processor
May 17 00:29:43.839987 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:29:43.839992 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:29:43.839997 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 17 00:29:43.840002 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:29:43.840007 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:29:43.840015 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 17 00:29:43.840019 kernel: Using GB pages for direct mapping
May 17 00:29:43.840024 kernel: ACPI: Early table checksum verification disabled
May 17 00:29:43.840029 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 17 00:29:43.840034 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840039 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840044 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840049 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:29:43.840053 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840061 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840066 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840071 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:29:43.840079 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 17 00:29:43.840084 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 17 00:29:43.840089 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:29:43.840097 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 17 00:29:43.840102 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 17 00:29:43.840107 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 17 00:29:43.840112 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 17 00:29:43.840117 kernel: No NUMA configuration found
May 17 00:29:43.840122 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 17 00:29:43.840127 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
May 17 00:29:43.840132 kernel: Zone ranges:
May 17 00:29:43.840141 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:29:43.840146 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:29:43.840151 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 17 00:29:43.840156 kernel: Movable zone start for each node
May 17 00:29:43.840161 kernel: Early memory node ranges
May 17 00:29:43.840166 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:29:43.840171 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 17 00:29:43.840176 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 17 00:29:43.840181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 17 00:29:43.840186 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:29:43.840193 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:29:43.840198 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 17 00:29:43.840204 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:29:43.840209 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:29:43.840214 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:29:43.840219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:29:43.840224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:29:43.840229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:29:43.840234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:29:43.840242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:29:43.840247 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:29:43.840252 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:29:43.840257 kernel: TSC deadline timer available
May 17 00:29:43.840262 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:29:43.840267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:29:43.840272 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:29:43.840277 kernel: kvm-guest: setup PV sched yield
May 17 00:29:43.840282 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:29:43.840290 kernel: Booting paravirtualized kernel on KVM
May 17 00:29:43.840295 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:29:43.840300 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:29:43.840305 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:29:43.840310 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:29:43.840315 kernel: pcpu-alloc: [0] 0 1
May 17 00:29:43.840320 kernel: kvm-guest: PV spinlocks enabled
May 17 00:29:43.840325 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:29:43.840331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:29:43.840339 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:29:43.840344 kernel: random: crng init done
May 17 00:29:43.840349 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:29:43.840354 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:29:43.840359 kernel: Fallback order for Node 0: 0
May 17 00:29:43.840364 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 17 00:29:43.840369 kernel: Policy zone: Normal
May 17 00:29:43.840374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:29:43.840393 kernel: software IO TLB: area num 2.
May 17 00:29:43.840398 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved)
May 17 00:29:43.840404 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:29:43.840409 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:29:43.840414 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:29:43.840419 kernel: Dynamic Preempt: voluntary
May 17 00:29:43.840424 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:29:43.840430 kernel: rcu: RCU event tracing is enabled.
May 17 00:29:43.840435 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:29:43.840443 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:29:43.840448 kernel: Rude variant of Tasks RCU enabled.
May 17 00:29:43.840453 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:29:43.840458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:29:43.840463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:29:43.840468 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:29:43.840474 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:29:43.840479 kernel: Console: colour VGA+ 80x25
May 17 00:29:43.840484 kernel: printk: console [tty0] enabled
May 17 00:29:43.840489 kernel: printk: console [ttyS0] enabled
May 17 00:29:43.840496 kernel: ACPI: Core revision 20230628
May 17 00:29:43.840501 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:29:43.840506 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:29:43.840519 kernel: x2apic enabled
May 17 00:29:43.840527 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:29:43.840532 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 00:29:43.840538 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 00:29:43.840543 kernel: kvm-guest: setup PV IPIs
May 17 00:29:43.840548 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:29:43.840554 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:29:43.840559 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 17 00:29:43.840564 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:29:43.840572 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:29:43.840578 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:29:43.840583 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:29:43.840589 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:29:43.840596 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:29:43.840602 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:29:43.840607 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:29:43.840612 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:29:43.840618 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 17 00:29:43.840624 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 00:29:43.840629 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 00:29:43.840634 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:29:43.840640 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:29:43.840648 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:29:43.840653 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 17 00:29:43.840658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:29:43.840664 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 17 00:29:43.840669 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 17 00:29:43.840675 kernel: Freeing SMP alternatives memory: 32K
May 17 00:29:43.840680 kernel: pid_max: default: 32768 minimum: 301
May 17 00:29:43.840685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:29:43.840693 kernel: landlock: Up and running.
May 17 00:29:43.840698 kernel: SELinux: Initializing.
May 17 00:29:43.840704 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:29:43.840709 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:29:43.840715 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 17 00:29:43.840720 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:29:43.840725 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:29:43.840731 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:29:43.840736 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:29:43.840744 kernel: ... version: 0
May 17 00:29:43.840749 kernel: ... bit width: 48
May 17 00:29:43.840755 kernel: ... generic registers: 6
May 17 00:29:43.840760 kernel: ... value mask: 0000ffffffffffff
May 17 00:29:43.840765 kernel: ... max period: 00007fffffffffff
May 17 00:29:43.840770 kernel: ... fixed-purpose events: 0
May 17 00:29:43.840776 kernel: ... event mask: 000000000000003f
May 17 00:29:43.840781 kernel: signal: max sigframe size: 3376
May 17 00:29:43.840786 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:29:43.840794 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:29:43.840800 kernel: smp: Bringing up secondary CPUs ...
May 17 00:29:43.840805 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:29:43.840810 kernel: .... node #0, CPUs: #1
May 17 00:29:43.840816 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:29:43.840821 kernel: smpboot: Max logical packages: 1
May 17 00:29:43.840826 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 17 00:29:43.840832 kernel: devtmpfs: initialized
May 17 00:29:43.840837 kernel: x86/mm: Memory block size: 128MB
May 17 00:29:43.840842 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:29:43.840850 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:29:43.840856 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:29:43.840861 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:29:43.840866 kernel: audit: initializing netlink subsys (disabled)
May 17 00:29:43.840872 kernel: audit: type=2000 audit(1747441784.024:1): state=initialized audit_enabled=0 res=1
May 17 00:29:43.840877 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:29:43.840882 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:29:43.840888 kernel: cpuidle: using governor menu
May 17 00:29:43.840893 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:29:43.840901 kernel: dca service started, version 1.12.1
May 17 00:29:43.840916 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:29:43.840922 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 17 00:29:43.840927 kernel: PCI: Using configuration type 1 for base access
May 17 00:29:43.840949 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:29:43.840970 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:29:43.840975 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:29:43.840996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:29:43.841004 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:29:43.841009 kernel: ACPI: Added _OSI(Module Device)
May 17 00:29:43.841015 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:29:43.841020 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:29:43.841025 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:29:43.841031 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:29:43.841036 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:29:43.841041 kernel: ACPI: Interpreter enabled
May 17 00:29:43.841046 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:29:43.841052 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:29:43.841060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:29:43.841065 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:29:43.841070 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:29:43.841076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:29:43.841221 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:29:43.841326 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:29:43.841720 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:29:43.841736 kernel: PCI host bridge to bus 0000:00
May 17 00:29:43.842368 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:29:43.843495 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:29:43.843622 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:29:43.843724 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 17 00:29:43.843811 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:29:43.843896 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 17 00:29:43.843987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:29:43.844103 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:29:43.844207 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:29:43.844303 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:29:43.844413 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:29:43.844511 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:29:43.844609 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:29:43.844708 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
May 17 00:29:43.844801 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
May 17 00:29:43.844893 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:29:43.844985 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:29:43.845083 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:29:43.845176 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 17 00:29:43.845273 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:29:43.845365 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:29:43.849003 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:29:43.849123 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:29:43.849222 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:29:43.849325 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:29:43.849435 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
May 17 00:29:43.849535 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
May 17 00:29:43.849636 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:29:43.849727 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:29:43.849736 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:29:43.849742 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:29:43.849747 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:29:43.849753 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:29:43.849762 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:29:43.849768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:29:43.849773 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:29:43.849778 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:29:43.849784 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:29:43.849789 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:29:43.849795 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:29:43.849800 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:29:43.849806 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:29:43.849814 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:29:43.849819 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:29:43.849825 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:29:43.849849 kernel: iommu: Default domain type: Translated
May 17 00:29:43.849855 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:29:43.849860 kernel: PCI: Using ACPI for IRQ routing
May 17 00:29:43.849866 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:29:43.849871 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 17 00:29:43.849877 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 17 00:29:43.849979 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:29:43.850072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:29:43.850164 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:29:43.850172 kernel: vgaarb: loaded
May 17 00:29:43.850178 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:29:43.850183 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:29:43.850188 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:29:43.850194 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:29:43.850199 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:29:43.850208 kernel: pnp: PnP ACPI init
May 17 00:29:43.850313 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:29:43.850321 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:29:43.850327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:29:43.850332 kernel: NET: Registered PF_INET protocol family
May 17 00:29:43.850338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:29:43.850343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:29:43.850349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:29:43.850357 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:29:43.850363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:29:43.850369 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:29:43.850374 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:29:43.850380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:29:43.850433 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:29:43.850438 kernel: NET: Registered PF_XDP protocol family
May 17 00:29:43.850532 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:29:43.850617 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:29:43.850707 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:29:43.850792 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 17 00:29:43.850877 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:29:43.850960 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 17 00:29:43.850968 kernel: PCI: CLS 0 bytes, default 64
May 17 00:29:43.850974 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:29:43.850979 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 17 00:29:43.850985 kernel: Initialise system trusted keyrings
May 17 00:29:43.850993 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:29:43.850999 kernel: Key type asymmetric registered
May 17 00:29:43.851005 kernel: Asymmetric key parser 'x509' registered
May 17 00:29:43.851010 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:29:43.851015 kernel: io scheduler mq-deadline registered
May 17 00:29:43.851021 kernel: io scheduler kyber registered
May 17 00:29:43.851026 kernel: io scheduler bfq registered
May 17 00:29:43.851031 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:29:43.851037 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:29:43.851045 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:29:43.851050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:29:43.851056 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:29:43.851062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:29:43.851067 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:29:43.851072 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:29:43.851174 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:29:43.851183 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:29:43.851269 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:29:43.851359 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:29:43 UTC (1747441783)
May 17 00:29:43.851461 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:29:43.851470 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:29:43.851476 kernel: NET: Registered PF_INET6 protocol family
May 17 00:29:43.851481 kernel: Segment Routing with IPv6
May 17 00:29:43.851487 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:29:43.851492 kernel: NET: Registered PF_PACKET protocol family
May 17 00:29:43.851497 kernel: Key type dns_resolver registered
May 17 00:29:43.851506 kernel: IPI shorthand broadcast: enabled
May 17 00:29:43.851512 kernel: sched_clock: Marking stable (575002970, 163688000)->(774251910, -35560940)
May 17 00:29:43.851517 kernel: registered taskstats version 1
May 17 00:29:43.851523 kernel: Loading compiled-in X.509 certificates
May 17 00:29:43.851528 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:29:43.851534 kernel: Key type .fscrypt registered
May 17 00:29:43.851539 kernel: Key type fscrypt-provisioning registered
May 17 00:29:43.851544 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:29:43.851550 kernel: ima: Allocated hash algorithm: sha1
May 17 00:29:43.851557 kernel: ima: No architecture policies found
May 17 00:29:43.851563 kernel: clk: Disabling unused clocks
May 17 00:29:43.851568 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:29:43.851574 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:29:43.851579 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:29:43.851584 kernel: Run /init as init process
May 17 00:29:43.851589 kernel: with arguments:
May 17 00:29:43.851595 kernel: /init
May 17 00:29:43.851600 kernel: with environment:
May 17 00:29:43.851608 kernel: HOME=/
May 17 00:29:43.851613 kernel: TERM=linux
May 17 00:29:43.851618 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:29:43.851626 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:29:43.851634 systemd[1]: Detected virtualization kvm.
May 17 00:29:43.851640 systemd[1]: Detected architecture x86-64.
May 17 00:29:43.851646 systemd[1]: Running in initrd.
May 17 00:29:43.851651 systemd[1]: No hostname configured, using default hostname.
May 17 00:29:43.851659 systemd[1]: Hostname set to .
May 17 00:29:43.851665 systemd[1]: Initializing machine ID from random generator.
May 17 00:29:43.851671 systemd[1]: Queued start job for default target initrd.target.
May 17 00:29:43.851677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:29:43.851697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:29:43.851708 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:29:43.851714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:29:43.851720 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:29:43.851727 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:29:43.851734 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:29:43.851740 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:29:43.851746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:29:43.851754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:29:43.851760 systemd[1]: Reached target paths.target - Path Units.
May 17 00:29:43.851766 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:29:43.851772 systemd[1]: Reached target swap.target - Swaps.
May 17 00:29:43.851778 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:29:43.851784 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:29:43.851790 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:29:43.851796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:29:43.851802 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:29:43.851811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:29:43.851817 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:29:43.851823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:29:43.851829 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:29:43.851834 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:29:43.851841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:29:43.851847 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:29:43.851852 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:29:43.851861 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:29:43.851867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:29:43.851887 systemd-journald[177]: Collecting audit messages is disabled.
May 17 00:29:43.851901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:29:43.851910 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:29:43.851916 systemd-journald[177]: Journal started
May 17 00:29:43.851932 systemd-journald[177]: Runtime Journal (/run/log/journal/a3e6d713fdbb49cfa892d18b1a18b9a6) is 8.0M, max 78.3M, 70.3M free.
May 17 00:29:43.842129 systemd-modules-load[178]: Inserted module 'overlay'
May 17 00:29:43.895718 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:29:43.895731 kernel: Bridge firewalling registered
May 17 00:29:43.895739 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:29:43.868010 systemd-modules-load[178]: Inserted module 'br_netfilter'
May 17 00:29:43.896335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:29:43.897281 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:29:43.898245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:29:43.899129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:29:43.906491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:29:43.907652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:29:43.912518 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:29:43.923501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:29:43.924254 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:29:43.924896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:29:43.941514 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:29:43.942199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:29:43.942883 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:29:43.947162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:29:43.951502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:29:43.958097 dracut-cmdline[203]: dracut-dracut-053
May 17 00:29:43.960767 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:29:43.964513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:29:43.978358 systemd-resolved[207]: Positive Trust Anchors:
May 17 00:29:43.978372 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:29:43.978408 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:29:43.981279 systemd-resolved[207]: Defaulting to hostname 'linux'.
May 17 00:29:43.982342 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:29:43.982914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:29:44.027416 kernel: SCSI subsystem initialized
May 17 00:29:44.035407 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:29:44.044415 kernel: iscsi: registered transport (tcp)
May 17 00:29:44.061700 kernel: iscsi: registered transport (qla4xxx)
May 17 00:29:44.061738 kernel: QLogic iSCSI HBA Driver
May 17 00:29:44.103590 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:29:44.109534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:29:44.131657 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:29:44.131686 kernel: device-mapper: uevent: version 1.0.3
May 17 00:29:44.133393 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:29:44.170406 kernel: raid6: avx2x4 gen() 33281 MB/s
May 17 00:29:44.187403 kernel: raid6: avx2x2 gen() 29412 MB/s
May 17 00:29:44.205895 kernel: raid6: avx2x1 gen() 24865 MB/s
May 17 00:29:44.205908 kernel: raid6: using algorithm avx2x4 gen() 33281 MB/s
May 17 00:29:44.224542 kernel: raid6: .... xor() 4351 MB/s, rmw enabled
May 17 00:29:44.224556 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:29:44.241408 kernel: xor: automatically using best checksumming function avx
May 17 00:29:44.355413 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:29:44.366231 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:29:44.370524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:29:44.381827 systemd-udevd[393]: Using default interface naming scheme 'v255'.
May 17 00:29:44.385144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:29:44.393520 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:29:44.405056 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
May 17 00:29:44.431014 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:29:44.435501 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:29:44.483158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:29:44.489539 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:29:44.497880 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:29:44.499714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:29:44.500177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:29:44.500634 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:29:44.510690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:29:44.523426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:29:44.541420 kernel: scsi host0: Virtio SCSI HBA
May 17 00:29:44.545182 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:29:44.558407 kernel: libata version 3.00 loaded.
May 17 00:29:44.565399 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:29:44.576404 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:29:44.578406 kernel: AES CTR mode by8 optimization enabled
May 17 00:29:44.581956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:29:44.637488 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:29:44.639326 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:29:44.645446 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:29:44.645631 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:29:44.639899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:29:44.674076 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:29:44.674233 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:29:44.640006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:29:44.640717 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:29:44.653009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:29:44.678895 kernel: scsi host1: ahci
May 17 00:29:44.683458 kernel: scsi host2: ahci
May 17 00:29:44.692409 kernel: scsi host3: ahci
May 17 00:29:44.699355 kernel: scsi host4: ahci
May 17 00:29:44.704355 kernel: scsi host5: ahci
May 17 00:29:44.704545 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 17 00:29:44.704704 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 17 00:29:44.707734 kernel: scsi host6: ahci
May 17 00:29:44.707901 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:29:44.708045 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
May 17 00:29:44.708056 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 17 00:29:44.708192 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
May 17 00:29:44.710402 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:29:44.710561 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
May 17 00:29:44.716217 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:29:44.716243 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
May 17 00:29:44.716254 kernel: GPT:9289727 != 167739391
May 17 00:29:44.716263 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:29:44.716271 kernel: GPT:9289727 != 167739391
May 17 00:29:44.716279 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:29:44.716288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:29:44.716296 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
May 17 00:29:44.721401 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:29:44.721564 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
May 17 00:29:44.775325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:29:44.782510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:29:44.800997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:29:45.038413 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 17 00:29:45.038481 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:29:45.048838 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 17 00:29:45.048861 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:29:45.048872 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:29:45.049404 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:29:45.085449 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (464)
May 17 00:29:45.088059 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (467)
May 17 00:29:45.094344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 17 00:29:45.099874 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 17 00:29:45.107443 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 17 00:29:45.108159 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 17 00:29:45.114193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:29:45.130528 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:29:45.135009 disk-uuid[568]: Primary Header is updated.
May 17 00:29:45.135009 disk-uuid[568]: Secondary Entries is updated.
May 17 00:29:45.135009 disk-uuid[568]: Secondary Header is updated.
May 17 00:29:45.140459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:29:45.144411 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:29:46.149120 disk-uuid[569]: The operation has completed successfully.
May 17 00:29:46.149812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:29:46.193310 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:29:46.193464 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:29:46.202511 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:29:46.205176 sh[583]: Success
May 17 00:29:46.217411 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:29:46.255784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:29:46.267466 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:29:46.270494 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:29:46.282769 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:29:46.282797 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:29:46.284424 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:29:46.287141 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:29:46.287157 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:29:46.294407 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:29:46.295995 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:29:46.296930 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:29:46.301490 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:29:46.303650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:29:46.313583 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:29:46.313608 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:29:46.315665 kernel: BTRFS info (device sda6): using free space tree
May 17 00:29:46.324151 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:29:46.324171 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:29:46.335295 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:29:46.338418 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:29:46.343516 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:29:46.353566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:29:46.416694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:29:46.429153 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:29:46.429429 ignition[694]: Ignition 2.19.0
May 17 00:29:46.429436 ignition[694]: Stage: fetch-offline
May 17 00:29:46.432116 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:29:46.429469 ignition[694]: no configs at "/usr/lib/ignition/base.d"
May 17 00:29:46.429478 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:29:46.429563 ignition[694]: parsed url from cmdline: ""
May 17 00:29:46.429566 ignition[694]: no config URL provided
May 17 00:29:46.429570 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:29:46.429578 ignition[694]: no config at "/usr/lib/ignition/user.ign"
May 17 00:29:46.429583 ignition[694]: failed to fetch config: resource requires networking
May 17 00:29:46.429740 ignition[694]: Ignition finished successfully
May 17 00:29:46.449353 systemd-networkd[769]: lo: Link UP
May 17 00:29:46.449365 systemd-networkd[769]: lo: Gained carrier
May 17 00:29:46.451290 systemd-networkd[769]: Enumeration completed
May 17 00:29:46.451845 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:29:46.452243 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:29:46.452248 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:29:46.453336 systemd[1]: Reached target network.target - Network.
May 17 00:29:46.454710 systemd-networkd[769]: eth0: Link UP
May 17 00:29:46.454714 systemd-networkd[769]: eth0: Gained carrier
May 17 00:29:46.454723 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:29:46.464553 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:29:46.477824 ignition[774]: Ignition 2.19.0
May 17 00:29:46.477838 ignition[774]: Stage: fetch
May 17 00:29:46.478013 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 17 00:29:46.478026 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:29:46.478104 ignition[774]: parsed url from cmdline: ""
May 17 00:29:46.478108 ignition[774]: no config URL provided
May 17 00:29:46.478115 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:29:46.478124 ignition[774]: no config at "/usr/lib/ignition/user.ign"
May 17 00:29:46.478145 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #1
May 17 00:29:46.478322 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:29:46.679294 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #2
May 17 00:29:46.679557 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:29:46.876467 systemd-networkd[769]: eth0: DHCPv4 address 172.232.0.241/24, gateway 172.232.0.1 acquired from 23.213.14.22
May 17 00:29:47.079848 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #3
May 17 00:29:47.169376 ignition[774]: PUT result: OK
May 17 00:29:47.169446 ignition[774]: GET http://169.254.169.254/v1/user-data: attempt #1
May 17 00:29:47.281275 ignition[774]: GET result: OK
May 17 00:29:47.281449 ignition[774]: parsing config with SHA512: d25b9c58a24b87f13faf59ad4f1043755244e6b8d0cb0e3ad8bb369747ebccbb2bb51e1a3a83de3759ed717cb5fd52892a07fe4730c684ce738b52fad307ee4a
May 17 00:29:47.284687 unknown[774]: fetched base config from "system"
May 17 00:29:47.285294 unknown[774]: fetched base config from "system"
May 17 00:29:47.285306 unknown[774]: fetched user config from "akamai"
May 17 00:29:47.285577 ignition[774]: fetch: fetch complete
May 17 00:29:47.285582 ignition[774]: fetch: fetch passed
May 17 00:29:47.285626 ignition[774]: Ignition finished successfully
May 17 00:29:47.288201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:29:47.295522 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:29:47.316719 ignition[782]: Ignition 2.19.0
May 17 00:29:47.316737 ignition[782]: Stage: kargs
May 17 00:29:47.316905 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 17 00:29:47.320531 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:29:47.316916 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:29:47.317593 ignition[782]: kargs: kargs passed
May 17 00:29:47.317643 ignition[782]: Ignition finished successfully
May 17 00:29:47.326551 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:29:47.347695 ignition[788]: Ignition 2.19.0
May 17 00:29:47.347711 ignition[788]: Stage: disks
May 17 00:29:47.347849 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 17 00:29:47.347862 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:29:47.349828 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:29:47.348453 ignition[788]: disks: disks passed
May 17 00:29:47.348492 ignition[788]: Ignition finished successfully
May 17 00:29:47.351556 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:29:47.355987 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:29:47.357019 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:29:47.358169 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:29:47.359429 systemd[1]: Reached target basic.target - Basic System.
May 17 00:29:47.367554 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:29:47.381949 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:29:47.384542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:29:47.390507 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:29:47.473412 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:29:47.473718 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:29:47.474921 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:29:47.480455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:29:47.484590 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:29:47.485525 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 00:29:47.485627 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:29:47.485661 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:29:47.494434 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (804)
May 17 00:29:47.499413 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:29:47.499439 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:29:47.499451 kernel: BTRFS info (device sda6): using free space tree
May 17 00:29:47.498959 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:29:47.505526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:29:47.509555 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:29:47.509578 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:29:47.512554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:29:47.548042 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:29:47.553102 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
May 17 00:29:47.558236 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:29:47.563334 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:29:47.656319 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:29:47.666486 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:29:47.669541 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:29:47.675533 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:29:47.676962 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:47.686897 systemd-networkd[769]: eth0: Gained IPv6LL May 17 00:29:47.705238 ignition[916]: INFO : Ignition 2.19.0 May 17 00:29:47.706301 ignition[916]: INFO : Stage: mount May 17 00:29:47.706301 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:47.707683 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:47.707544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:29:47.711196 ignition[916]: INFO : mount: mount passed May 17 00:29:47.711196 ignition[916]: INFO : Ignition finished successfully May 17 00:29:47.710893 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:29:47.716565 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:29:48.478634 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:29:48.491434 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (929) May 17 00:29:48.491517 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:48.493898 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:48.495612 kernel: BTRFS info (device sda6): using free space tree May 17 00:29:48.502189 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:29:48.502261 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:29:48.504758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:29:48.521826 ignition[946]: INFO : Ignition 2.19.0 May 17 00:29:48.521826 ignition[946]: INFO : Stage: files May 17 00:29:48.523032 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:48.523032 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:48.523032 ignition[946]: DEBUG : files: compiled without relabeling support, skipping May 17 00:29:48.525026 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:29:48.525026 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:29:48.527225 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:29:48.528123 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:29:48.529200 unknown[946]: wrote ssh authorized keys file for user: core May 17 00:29:48.530021 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:29:48.530827 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:48.530827 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:29:48.807178 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:29:49.152184 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:49.152184 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:29:49.883440 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:29:50.138312 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:50.138312 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:29:50.140829 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): [finished] 
processing unit "coreos-metadata.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:50.157464 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:50.157464 ignition[946]: INFO : files: files passed May 17 00:29:50.157464 ignition[946]: INFO : Ignition finished successfully May 17 00:29:50.145318 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:29:50.155540 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:29:50.160655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:29:50.162202 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:29:50.162301 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:29:50.172459 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:50.172459 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:50.174984 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:50.176572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:50.178273 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:29:50.183519 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:29:50.206691 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:29:50.206801 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:29:50.208023 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:29:50.209065 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:29:50.210256 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:29:50.219508 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:29:50.230211 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:50.235511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:29:50.243327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:50.244011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:50.245261 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:29:50.246443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:29:50.246537 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:50.248573 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:29:50.249312 systemd[1]: Stopped target basic.target - Basic System. May 17 00:29:50.250326 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
May 17 00:29:50.251345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:29:50.252552 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:29:50.253770 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:29:50.254960 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:29:50.256167 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:29:50.257373 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:29:50.258531 systemd[1]: Stopped target swap.target - Swaps. May 17 00:29:50.259511 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:29:50.259605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:29:50.261564 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:50.262318 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:50.263507 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:29:50.263613 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:50.264754 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:29:50.264846 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:29:50.266313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:29:50.266443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:50.267187 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:29:50.267317 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:29:50.279795 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:29:50.282562 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:29:50.283156 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:29:50.283300 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:29:50.285164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:29:50.285299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:29:50.294275 ignition[998]: INFO : Ignition 2.19.0 May 17 00:29:50.294275 ignition[998]: INFO : Stage: umount May 17 00:29:50.299436 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:50.299436 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:50.299436 ignition[998]: INFO : umount: umount passed May 17 00:29:50.299436 ignition[998]: INFO : Ignition finished successfully May 17 00:29:50.297672 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:29:50.297785 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:29:50.300213 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:29:50.300422 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:29:50.303427 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:29:50.303479 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:29:50.304427 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:29:50.304484 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
May 17 00:29:50.305587 systemd[1]: Stopped target network.target - Network. May 17 00:29:50.306081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:29:50.306131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:29:50.306771 systemd[1]: Stopped target paths.target - Path Units. May 17 00:29:50.307315 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:29:50.312472 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:29:50.313095 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:29:50.318342 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:29:50.318877 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:29:50.318924 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:29:50.320485 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:29:50.320532 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:29:50.328049 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:29:50.328101 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:29:50.330076 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:29:50.330124 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:29:50.330914 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:29:50.333146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:29:50.335219 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:29:50.336159 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:29:50.336265 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:29:50.337581 systemd-networkd[769]: eth0: DHCPv6 lease lost May 17 00:29:50.341878 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:29:50.342000 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:29:50.347601 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:29:50.347750 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:29:50.353162 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:29:50.353229 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:50.359480 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:29:50.360596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:29:50.361180 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:29:50.363084 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:29:50.363136 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:50.363777 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:29:50.363827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:29:50.364833 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:29:50.364879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:50.366148 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 17 00:29:50.370996 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:29:50.371101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:29:50.378632 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:29:50.379304 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:29:50.380978 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:29:50.381115 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:29:50.382071 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:29:50.382235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:50.383679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:29:50.383750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:29:50.384742 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:29:50.384784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:50.385778 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:29:50.385828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:29:50.387424 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:29:50.387472 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:29:50.388583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:29:50.388636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:29:50.395549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:29:50.396985 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:29:50.397043 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:29:50.397661 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:29:50.397709 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:29:50.398292 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:29:50.398336 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:50.399585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:29:50.399631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:50.403598 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:29:50.403725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:29:50.405257 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:29:50.416526 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:29:50.421934 systemd[1]: Switching root. 
May 17 00:29:50.452681 systemd-journald[177]: Journal stopped
17 00:29:43.840176 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] May 17 00:29:43.840181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 17 00:29:43.840186 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:29:43.840193 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:29:43.840198 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 17 00:29:43.840204 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:29:43.840209 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:29:43.840214 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:29:43.840219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:29:43.840224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:29:43.840229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:29:43.840234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:29:43.840242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:29:43.840247 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:29:43.840252 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:29:43.840257 kernel: TSC deadline timer available May 17 00:29:43.840262 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:29:43.840267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:29:43.840272 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:29:43.840277 kernel: kvm-guest: setup PV sched yield May 17 00:29:43.840282 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:29:43.840290 kernel: Booting paravirtualized kernel on KVM May 17 00:29:43.840295 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:29:43.840300 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:29:43.840305 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:29:43.840310 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:29:43.840315 kernel: pcpu-alloc: [0] 0 1 May 17 00:29:43.840320 kernel: kvm-guest: PV spinlocks enabled May 17 00:29:43.840325 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:29:43.840331 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:29:43.840339 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:29:43.840344 kernel: random: crng init done May 17 00:29:43.840349 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:29:43.840354 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:29:43.840359 kernel: Fallback order for Node 0: 0 May 17 00:29:43.840364 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 17 00:29:43.840369 kernel: Policy zone: Normal May 17 00:29:43.840374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:29:43.840393 kernel: software IO TLB: area num 2. May 17 00:29:43.840398 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved) May 17 00:29:43.840404 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:29:43.840409 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:29:43.840414 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:29:43.840419 kernel: Dynamic Preempt: voluntary May 17 00:29:43.840424 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:29:43.840430 kernel: rcu: RCU event tracing is enabled. May 17 00:29:43.840435 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:29:43.840443 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:29:43.840448 kernel: Rude variant of Tasks RCU enabled. May 17 00:29:43.840453 kernel: Tracing variant of Tasks RCU enabled. May 17 00:29:43.840458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:29:43.840463 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:29:43.840468 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:29:43.840474 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:29:43.840479 kernel: Console: colour VGA+ 80x25 May 17 00:29:43.840484 kernel: printk: console [tty0] enabled May 17 00:29:43.840489 kernel: printk: console [ttyS0] enabled May 17 00:29:43.840496 kernel: ACPI: Core revision 20230628 May 17 00:29:43.840501 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:29:43.840506 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:29:43.840519 kernel: x2apic enabled May 17 00:29:43.840527 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:29:43.840532 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 17 00:29:43.840538 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 17 00:29:43.840543 kernel: kvm-guest: setup PV IPIs May 17 00:29:43.840548 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:29:43.840554 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:29:43.840559 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) May 17 00:29:43.840564 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:29:43.840572 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:29:43.840578 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:29:43.840583 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:29:43.840589 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:29:43.840596 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:29:43.840602 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 17 00:29:43.840607 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:29:43.840612 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:29:43.840618 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 17 00:29:43.840624 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 17 00:29:43.840629 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 17 00:29:43.840634 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:29:43.840640 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:29:43.840648 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:29:43.840653 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:29:43.840658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:29:43.840664 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 17 00:29:43.840669 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 17 00:29:43.840675 kernel: Freeing SMP alternatives memory: 32K May 17 00:29:43.840680 kernel: pid_max: default: 32768 minimum: 301 May 17 00:29:43.840685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:29:43.840693 kernel: landlock: Up and running. May 17 00:29:43.840698 kernel: SELinux: Initializing. May 17 00:29:43.840704 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:29:43.840709 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:29:43.840715 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 17 00:29:43.840720 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:29:43.840725 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:29:43.840731 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:29:43.840736 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:29:43.840744 kernel: ... version: 0 May 17 00:29:43.840749 kernel: ... bit width: 48 May 17 00:29:43.840755 kernel: ... generic registers: 6 May 17 00:29:43.840760 kernel: ... value mask: 0000ffffffffffff May 17 00:29:43.840765 kernel: ... max period: 00007fffffffffff May 17 00:29:43.840770 kernel: ... fixed-purpose events: 0 May 17 00:29:43.840776 kernel: ... 
event mask: 000000000000003f May 17 00:29:43.840781 kernel: signal: max sigframe size: 3376 May 17 00:29:43.840786 kernel: rcu: Hierarchical SRCU implementation. May 17 00:29:43.840794 kernel: rcu: Max phase no-delay instances is 400. May 17 00:29:43.840800 kernel: smp: Bringing up secondary CPUs ... May 17 00:29:43.840805 kernel: smpboot: x86: Booting SMP configuration: May 17 00:29:43.840810 kernel: .... node #0, CPUs: #1 May 17 00:29:43.840816 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:29:43.840821 kernel: smpboot: Max logical packages: 1 May 17 00:29:43.840826 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) May 17 00:29:43.840832 kernel: devtmpfs: initialized May 17 00:29:43.840837 kernel: x86/mm: Memory block size: 128MB May 17 00:29:43.840842 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:29:43.840850 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:29:43.840856 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:29:43.840861 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:29:43.840866 kernel: audit: initializing netlink subsys (disabled) May 17 00:29:43.840872 kernel: audit: type=2000 audit(1747441784.024:1): state=initialized audit_enabled=0 res=1 May 17 00:29:43.840877 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:29:43.840882 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:29:43.840888 kernel: cpuidle: using governor menu May 17 00:29:43.840893 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:29:43.840901 kernel: dca service started, version 1.12.1 May 17 00:29:43.840916 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:29:43.840922 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 17 00:29:43.840927 kernel: PCI: Using configuration type 1 for base access May 17 00:29:43.840949 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:29:43.840970 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:29:43.840975 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:29:43.840996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:29:43.841004 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:29:43.841009 kernel: ACPI: Added _OSI(Module Device) May 17 00:29:43.841015 kernel: ACPI: Added _OSI(Processor Device) May 17 00:29:43.841020 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:29:43.841025 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:29:43.841031 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:29:43.841036 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:29:43.841041 kernel: ACPI: Interpreter enabled May 17 00:29:43.841046 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:29:43.841052 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:29:43.841060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:29:43.841065 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:29:43.841070 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:29:43.841076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:29:43.841221 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:29:43.841326 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:29:43.841720 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:29:43.841736 kernel: PCI host bridge to bus 0000:00 May 17 00:29:43.842368 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:29:43.843495 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:29:43.843622 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:29:43.843724 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 17 00:29:43.843811 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:29:43.843896 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 17 00:29:43.843987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:29:43.844103 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:29:43.844207 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:29:43.844303 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:29:43.844413 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:29:43.844511 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:29:43.844609 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:29:43.844708 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 May 17 00:29:43.844801 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] May 17 00:29:43.844893 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:29:43.844985 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:29:43.845083 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 00:29:43.845176 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 17 00:29:43.845273 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 17 
00:29:43.845365 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:29:43.849003 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:29:43.849123 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:29:43.849222 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:29:43.849325 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:29:43.849435 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] May 17 00:29:43.849535 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] May 17 00:29:43.849636 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:29:43.849727 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:29:43.849736 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:29:43.849742 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:29:43.849747 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:29:43.849753 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:29:43.849762 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:29:43.849768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:29:43.849773 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:29:43.849778 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:29:43.849784 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:29:43.849789 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:29:43.849795 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:29:43.849800 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:29:43.849806 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:29:43.849814 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:29:43.849819 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:29:43.849825 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:29:43.849849 kernel: iommu: Default domain type: Translated May 17 00:29:43.849855 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:29:43.849860 kernel: PCI: Using ACPI for IRQ routing May 17 00:29:43.849866 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:29:43.849871 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 17 00:29:43.849877 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 17 00:29:43.849979 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:29:43.850072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:29:43.850164 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:29:43.850172 kernel: vgaarb: loaded May 17 00:29:43.850178 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:29:43.850183 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:29:43.850188 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:29:43.850194 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:29:43.850199 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:29:43.850208 kernel: pnp: PnP ACPI init May 17 00:29:43.850313 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:29:43.850321 kernel: pnp: PnP ACPI: found 5 devices May 17 00:29:43.850327 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:29:43.850332 kernel: NET: Registered PF_INET protocol family May 17 00:29:43.850338 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:29:43.850343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:29:43.850349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:29:43.850357 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:29:43.850363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:29:43.850369 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:29:43.850374 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:29:43.850380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:29:43.850433 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:29:43.850438 kernel: NET: Registered PF_XDP protocol family May 17 00:29:43.850532 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:29:43.850617 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:29:43.850707 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:29:43.850792 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 17 00:29:43.850877 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:29:43.850960 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 17 00:29:43.850968 kernel: PCI: CLS 0 bytes, default 64 May 17 00:29:43.850974 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:29:43.850979 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 17 00:29:43.850985 kernel: Initialise system trusted keyrings May 17 00:29:43.850993 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:29:43.850999 kernel: Key type asymmetric registered May 17 00:29:43.851005 kernel: Asymmetric key parser 'x509' registered May 17 00:29:43.851010 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:29:43.851015 kernel: io scheduler mq-deadline registered May 17 00:29:43.851021 kernel: io scheduler kyber registered May 17 00:29:43.851026 kernel: io scheduler bfq registered May 17 00:29:43.851031 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:29:43.851037 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:29:43.851045 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:29:43.851050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:29:43.851056 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:29:43.851062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:29:43.851067 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:29:43.851072 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:29:43.851174 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:29:43.851183 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:29:43.851269 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:29:43.851359 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:29:43 UTC (1747441783) May 17 00:29:43.851461 kernel: rtc_cmos 00:03: alarms up to one day, 
y3k, 242 bytes nvram, hpet irqs May 17 00:29:43.851470 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:29:43.851476 kernel: NET: Registered PF_INET6 protocol family May 17 00:29:43.851481 kernel: Segment Routing with IPv6 May 17 00:29:43.851487 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:29:43.851492 kernel: NET: Registered PF_PACKET protocol family May 17 00:29:43.851497 kernel: Key type dns_resolver registered May 17 00:29:43.851506 kernel: IPI shorthand broadcast: enabled May 17 00:29:43.851512 kernel: sched_clock: Marking stable (575002970, 163688000)->(774251910, -35560940) May 17 00:29:43.851517 kernel: registered taskstats version 1 May 17 00:29:43.851523 kernel: Loading compiled-in X.509 certificates May 17 00:29:43.851528 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:29:43.851534 kernel: Key type .fscrypt registered May 17 00:29:43.851539 kernel: Key type fscrypt-provisioning registered May 17 00:29:43.851544 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:29:43.851550 kernel: ima: Allocated hash algorithm: sha1 May 17 00:29:43.851557 kernel: ima: No architecture policies found May 17 00:29:43.851563 kernel: clk: Disabling unused clocks May 17 00:29:43.851568 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:29:43.851574 kernel: Write protecting the kernel read-only data: 36864k May 17 00:29:43.851579 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:29:43.851584 kernel: Run /init as init process May 17 00:29:43.851589 kernel: with arguments: May 17 00:29:43.851595 kernel: /init May 17 00:29:43.851600 kernel: with environment: May 17 00:29:43.851608 kernel: HOME=/ May 17 00:29:43.851613 kernel: TERM=linux May 17 00:29:43.851618 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:29:43.851626 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:29:43.851634 systemd[1]: Detected virtualization kvm. May 17 00:29:43.851640 systemd[1]: Detected architecture x86-64. May 17 00:29:43.851646 systemd[1]: Running in initrd. May 17 00:29:43.851651 systemd[1]: No hostname configured, using default hostname. May 17 00:29:43.851659 systemd[1]: Hostname set to . May 17 00:29:43.851665 systemd[1]: Initializing machine ID from random generator. May 17 00:29:43.851671 systemd[1]: Queued start job for default target initrd.target. May 17 00:29:43.851677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:43.851697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:29:43.851708 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:29:43.851714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:29:43.851720 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:29:43.851727 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 17 00:29:43.851734 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:29:43.851740 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:29:43.851746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:43.851754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:43.851760 systemd[1]: Reached target paths.target - Path Units. May 17 00:29:43.851766 systemd[1]: Reached target slices.target - Slice Units. May 17 00:29:43.851772 systemd[1]: Reached target swap.target - Swaps. May 17 00:29:43.851778 systemd[1]: Reached target timers.target - Timer Units. May 17 00:29:43.851784 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:29:43.851790 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:29:43.851796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:29:43.851802 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:29:43.851811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:43.851817 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:29:43.851823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:43.851829 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:29:43.851834 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:29:43.851841 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:29:43.851847 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:29:43.851852 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:29:43.851861 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:29:43.851867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:29:43.851887 systemd-journald[177]: Collecting audit messages is disabled. May 17 00:29:43.851901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:29:43.851910 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:29:43.851916 systemd-journald[177]: Journal started May 17 00:29:43.851932 systemd-journald[177]: Runtime Journal (/run/log/journal/a3e6d713fdbb49cfa892d18b1a18b9a6) is 8.0M, max 78.3M, 70.3M free. May 17 00:29:43.842129 systemd-modules-load[178]: Inserted module 'overlay' May 17 00:29:43.895718 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:29:43.895731 kernel: Bridge firewalling registered May 17 00:29:43.895739 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:29:43.868010 systemd-modules-load[178]: Inserted module 'br_netfilter' May 17 00:29:43.896335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:43.897281 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:29:43.898245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:29:43.899129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 17 00:29:43.906491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:29:43.907652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:29:43.912518 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:29:43.923501 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:29:43.924254 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:29:43.924896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:43.941514 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:29:43.942199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:29:43.942883 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:43.947162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:29:43.951502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:29:43.958097 dracut-cmdline[203]: dracut-dracut-053 May 17 00:29:43.960767 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:29:43.964513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:29:43.978358 systemd-resolved[207]: Positive Trust Anchors: May 17 00:29:43.978372 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:29:43.978408 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:29:43.981279 systemd-resolved[207]: Defaulting to hostname 'linux'. May 17 00:29:43.982342 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:29:43.982914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:44.027416 kernel: SCSI subsystem initialized May 17 00:29:44.035407 kernel: Loading iSCSI transport class v2.0-870. May 17 00:29:44.044415 kernel: iscsi: registered transport (tcp) May 17 00:29:44.061700 kernel: iscsi: registered transport (qla4xxx) May 17 00:29:44.061738 kernel: QLogic iSCSI HBA Driver May 17 00:29:44.103590 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:29:44.109534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:29:44.131657 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:29:44.131686 kernel: device-mapper: uevent: version 1.0.3 May 17 00:29:44.133393 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:29:44.170406 kernel: raid6: avx2x4 gen() 33281 MB/s May 17 00:29:44.187403 kernel: raid6: avx2x2 gen() 29412 MB/s May 17 00:29:44.205895 kernel: raid6: avx2x1 gen() 24865 MB/s May 17 00:29:44.205908 kernel: raid6: using algorithm avx2x4 gen() 33281 MB/s May 17 00:29:44.224542 kernel: raid6: .... xor() 4351 MB/s, rmw enabled May 17 00:29:44.224556 kernel: raid6: using avx2x2 recovery algorithm May 17 00:29:44.241408 kernel: xor: automatically using best checksumming function avx May 17 00:29:44.355413 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:29:44.366231 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:29:44.370524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:29:44.381827 systemd-udevd[393]: Using default interface naming scheme 'v255'. May 17 00:29:44.385144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:44.393520 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:29:44.405056 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation May 17 00:29:44.431014 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:29:44.435501 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:29:44.483158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:29:44.489539 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:29:44.497880 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:29:44.499714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:29:44.500177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:44.500634 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:29:44.510690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:29:44.523426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:29:44.541420 kernel: scsi host0: Virtio SCSI HBA May 17 00:29:44.545182 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:29:44.558407 kernel: libata version 3.00 loaded. May 17 00:29:44.565399 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:29:44.576404 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:29:44.578406 kernel: AES CTR mode by8 optimization enabled May 17 00:29:44.581956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:29:44.637488 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:29:44.639326 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:29:44.645446 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:29:44.645631 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:29:44.639899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 17 00:29:44.674076 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:29:44.674233 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:29:44.640006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:44.640717 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:29:44.653009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:29:44.678895 kernel: scsi host1: ahci May 17 00:29:44.683458 kernel: scsi host2: ahci May 17 00:29:44.692409 kernel: scsi host3: ahci May 17 00:29:44.699355 kernel: scsi host4: ahci May 17 00:29:44.704355 kernel: scsi host5: ahci May 17 00:29:44.704545 kernel: sd 0:0:0:0: Power-on or device reset occurred May 17 00:29:44.704704 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 17 00:29:44.707734 kernel: scsi host6: ahci May 17 00:29:44.707901 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:29:44.708045 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 May 17 00:29:44.708056 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 17 00:29:44.708192 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 May 17 00:29:44.710402 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:29:44.710561 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 May 17 00:29:44.716217 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:29:44.716243 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 May 17 00:29:44.716254 kernel: GPT:9289727 != 167739391 May 17 00:29:44.716263 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:29:44.716271 kernel: GPT:9289727 != 167739391 May 17 00:29:44.716279 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:29:44.716288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:29:44.716296 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 May 17 00:29:44.721401 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:29:44.721564 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 May 17 00:29:44.775325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:44.782510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:29:44.800997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:29:45.038413 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:29:45.038481 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:29:45.048838 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:29:45.048861 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:29:45.048872 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:29:45.049404 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:29:45.085449 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (464) May 17 00:29:45.088059 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (467) May 17 00:29:45.094344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
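The GPT complaints above ("GPT:9289727 != 167739391") are what the kernel prints when a disk carries a backup GPT header positioned for a smaller disk than the block device it now sits on, which is common right after an image is provisioned onto a larger volume. Below is a minimal read-only sketch of the same consistency check, in Python with only the standard library; /dev/sda and the 512-byte logical block size are taken from the log, and the script needs root to open the device:

#!/usr/bin/env python3
"""Check whether a disk's backup GPT header sits on the last LBA.

Read-only; mirrors the kernel's "GPT:Alternate GPT header not at the
end of the disk" warning seen above."""
import struct
import sys

SECTOR = 512  # logical block size reported for sda in the log

def check_backup_header(dev: str) -> None:
    with open(dev, "rb") as f:
        f.seek(0, 2)                       # seek to end to learn the device size
        last_lba = f.tell() // SECTOR - 1
        f.seek(1 * SECTOR)                 # the primary GPT header lives at LBA 1
        hdr = f.read(92)
    if hdr[:8] != b"EFI PART":
        sys.exit(f"{dev}: no GPT signature at LBA 1")
    # header layout: current-LBA field at offset 24, backup-LBA field at offset 32
    current_lba, backup_lba = struct.unpack_from("<QQ", hdr, 24)
    if backup_lba != last_lba:
        print(f"GPT:{backup_lba} != {last_lba} (backup header not at end of disk)")
    else:
        print(f"{dev}: backup GPT header at last LBA {last_lba}")

if __name__ == "__main__":
    check_backup_header(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")

As the kernel message itself suggests, GNU Parted (or sgdisk's --move-second-header) can relocate the backup structures to the real end of the disk; on first boot the warnings are otherwise harmless, and disk-uuid rewrites the headers shortly afterwards.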
May 17 00:29:45.099874 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 17 00:29:45.107443 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:29:45.108159 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:29:45.114193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:29:45.130528 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:29:45.135009 disk-uuid[568]: Primary Header is updated. May 17 00:29:45.135009 disk-uuid[568]: Secondary Entries is updated. May 17 00:29:45.135009 disk-uuid[568]: Secondary Header is updated. May 17 00:29:45.140459 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:29:45.144411 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:29:46.149120 disk-uuid[569]: The operation has completed successfully. May 17 00:29:46.149812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:29:46.193310 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:29:46.193464 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:29:46.202511 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:29:46.205176 sh[583]: Success May 17 00:29:46.217411 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:29:46.255784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:29:46.267466 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:29:46.270494 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:29:46.282769 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:29:46.282797 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:46.284424 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:29:46.287141 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:29:46.287157 kernel: BTRFS info (device dm-0): using free space tree May 17 00:29:46.294407 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:29:46.295995 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:29:46.296930 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:29:46.301490 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:29:46.303650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:29:46.313583 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:46.313608 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:46.315665 kernel: BTRFS info (device sda6): using free space tree May 17 00:29:46.324151 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:29:46.324171 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:29:46.335295 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 17 00:29:46.338418 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:46.343516 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:29:46.353566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:29:46.416694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:29:46.429153 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:29:46.429429 ignition[694]: Ignition 2.19.0 May 17 00:29:46.429436 ignition[694]: Stage: fetch-offline May 17 00:29:46.432116 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:29:46.429469 ignition[694]: no configs at "/usr/lib/ignition/base.d" May 17 00:29:46.429478 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:46.429563 ignition[694]: parsed url from cmdline: "" May 17 00:29:46.429566 ignition[694]: no config URL provided May 17 00:29:46.429570 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:29:46.429578 ignition[694]: no config at "/usr/lib/ignition/user.ign" May 17 00:29:46.429583 ignition[694]: failed to fetch config: resource requires networking May 17 00:29:46.429740 ignition[694]: Ignition finished successfully May 17 00:29:46.449353 systemd-networkd[769]: lo: Link UP May 17 00:29:46.449365 systemd-networkd[769]: lo: Gained carrier May 17 00:29:46.451290 systemd-networkd[769]: Enumeration completed May 17 00:29:46.451845 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:29:46.452243 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:29:46.452248 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:29:46.453336 systemd[1]: Reached target network.target - Network. May 17 00:29:46.454710 systemd-networkd[769]: eth0: Link UP May 17 00:29:46.454714 systemd-networkd[769]: eth0: Gained carrier May 17 00:29:46.454723 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:29:46.464553 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 17 00:29:46.477824 ignition[774]: Ignition 2.19.0 May 17 00:29:46.477838 ignition[774]: Stage: fetch May 17 00:29:46.478013 ignition[774]: no configs at "/usr/lib/ignition/base.d" May 17 00:29:46.478026 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:46.478104 ignition[774]: parsed url from cmdline: "" May 17 00:29:46.478108 ignition[774]: no config URL provided May 17 00:29:46.478115 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:29:46.478124 ignition[774]: no config at "/usr/lib/ignition/user.ign" May 17 00:29:46.478145 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #1 May 17 00:29:46.478322 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:29:46.679294 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #2 May 17 00:29:46.679557 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:29:46.876467 systemd-networkd[769]: eth0: DHCPv4 address 172.232.0.241/24, gateway 172.232.0.1 acquired from 23.213.14.22 May 17 00:29:47.079848 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #3 May 17 00:29:47.169376 ignition[774]: PUT result: OK May 17 00:29:47.169446 ignition[774]: GET http://169.254.169.254/v1/user-data: attempt #1 May 17 00:29:47.281275 ignition[774]: GET result: OK May 17 00:29:47.281449 ignition[774]: parsing config with SHA512: d25b9c58a24b87f13faf59ad4f1043755244e6b8d0cb0e3ad8bb369747ebccbb2bb51e1a3a83de3759ed717cb5fd52892a07fe4730c684ce738b52fad307ee4a May 17 00:29:47.284687 unknown[774]: fetched base config from "system" May 17 00:29:47.285294 unknown[774]: fetched base config from "system" May 17 00:29:47.285306 unknown[774]: fetched user config from "akamai" May 17 00:29:47.285577 ignition[774]: fetch: fetch complete May 17 00:29:47.285582 ignition[774]: fetch: fetch passed May 17 00:29:47.285626 ignition[774]: Ignition finished successfully May 17 00:29:47.288201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:29:47.295522 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:29:47.316719 ignition[782]: Ignition 2.19.0 May 17 00:29:47.316737 ignition[782]: Stage: kargs May 17 00:29:47.316905 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 17 00:29:47.320531 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:29:47.316916 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:47.317593 ignition[782]: kargs: kargs passed May 17 00:29:47.317643 ignition[782]: Ignition finished successfully May 17 00:29:47.326551 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:29:47.347695 ignition[788]: Ignition 2.19.0 May 17 00:29:47.347711 ignition[788]: Stage: disks May 17 00:29:47.347849 ignition[788]: no configs at "/usr/lib/ignition/base.d" May 17 00:29:47.347862 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:47.349828 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:29:47.348453 ignition[788]: disks: disks passed May 17 00:29:47.348492 ignition[788]: Ignition finished successfully May 17 00:29:47.351556 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:29:47.355987 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
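The fetch stage above retries "PUT http://169.254.169.254/v1/token" until the DHCPv4 lease at 00:29:46.876 makes the link-local metadata address routable, then GETs /v1/user-data and logs the config's SHA512 before parsing it. A sketch of that token-then-fetch flow under stated assumptions: the two endpoint paths, the retry-on-unreachable behaviour, and the final SHA512 log line come from the log, while the Metadata-Token header names are assumptions modelled on token-gated metadata services, not facts taken from the log:

#!/usr/bin/env python3
"""Sketch of the token + user-data fetch that Ignition logs above."""
import hashlib
import time
import urllib.request

BASE = "http://169.254.169.254/v1"

def fetch_user_data(attempts: int = 5, backoff: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            # "PUT http://169.254.169.254/v1/token: attempt #N" in the log
            req = urllib.request.Request(
                f"{BASE}/token", method="PUT",
                headers={"Metadata-Token-Expiry-Seconds": "3600"},  # assumed header name
            )
            token = urllib.request.urlopen(req, timeout=5).read().decode()
            # "GET http://169.254.169.254/v1/user-data: attempt #1" in the log
            req = urllib.request.Request(
                f"{BASE}/user-data",
                headers={"Metadata-Token": token},  # assumed header name
            )
            return urllib.request.urlopen(req, timeout=5).read()
        except OSError as err:  # "network is unreachable" until DHCP completes
            print(f"PUT error (attempt #{attempt}): {err}")
            time.sleep(backoff)
    raise RuntimeError("metadata service unreachable")

if __name__ == "__main__":
    data = fetch_user_data()
    # Ignition then logs "parsing config with SHA512: ..."
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())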
May 17 00:29:47.357019 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:29:47.358169 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:29:47.359429 systemd[1]: Reached target basic.target - Basic System. May 17 00:29:47.367554 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:29:47.381949 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:29:47.384542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:29:47.390507 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:29:47.473412 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:29:47.473718 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:29:47.474921 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:29:47.480455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:29:47.484590 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:29:47.485525 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:29:47.485627 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:29:47.485661 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:29:47.494434 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (804) May 17 00:29:47.499413 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:47.499439 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:47.499451 kernel: BTRFS info (device sda6): using free space tree May 17 00:29:47.498959 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:29:47.505526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:29:47.509555 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:29:47.509578 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:29:47.512554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:29:47.548042 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:29:47.553102 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory May 17 00:29:47.558236 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:29:47.563334 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:29:47.656319 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:29:47.666486 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:29:47.669541 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:29:47.675533 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 17 00:29:47.676962 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:47.686897 systemd-networkd[769]: eth0: Gained IPv6LL May 17 00:29:47.705238 ignition[916]: INFO : Ignition 2.19.0 May 17 00:29:47.706301 ignition[916]: INFO : Stage: mount May 17 00:29:47.706301 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:47.707683 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:47.707544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:29:47.711196 ignition[916]: INFO : mount: mount passed May 17 00:29:47.711196 ignition[916]: INFO : Ignition finished successfully May 17 00:29:47.710893 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:29:47.716565 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:29:48.478634 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:29:48.491434 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (929) May 17 00:29:48.491517 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:29:48.493898 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:29:48.495612 kernel: BTRFS info (device sda6): using free space tree May 17 00:29:48.502189 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:29:48.502261 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:29:48.504758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:29:48.521826 ignition[946]: INFO : Ignition 2.19.0 May 17 00:29:48.521826 ignition[946]: INFO : Stage: files May 17 00:29:48.523032 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:48.523032 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:48.523032 ignition[946]: DEBUG : files: compiled without relabeling support, skipping May 17 00:29:48.525026 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:29:48.525026 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:29:48.527225 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:29:48.528123 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:29:48.529200 unknown[946]: wrote ssh authorized keys file for user: core May 17 00:29:48.530021 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:29:48.530827 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:48.530827 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:29:48.807178 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:29:49.152184 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:29:49.152184 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:49.155009 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:29:49.883440 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:29:50.138312 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:29:50.138312 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:29:50.140829 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:29:50.141807 ignition[946]: INFO : files: op(d): [finished] 
processing unit "coreos-metadata.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:29:50.141807 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:50.157464 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:29:50.157464 ignition[946]: INFO : files: files passed May 17 00:29:50.157464 ignition[946]: INFO : Ignition finished successfully May 17 00:29:50.145318 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:29:50.155540 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:29:50.160655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:29:50.162202 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:29:50.162301 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:29:50.172459 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:50.172459 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:50.174984 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:29:50.176572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:50.178273 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:29:50.183519 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:29:50.206691 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:29:50.206801 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:29:50.208023 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:29:50.209065 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:29:50.210256 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:29:50.219508 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:29:50.230211 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:50.235511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:29:50.243327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:50.244011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:50.245261 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:29:50.246443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:29:50.246537 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:29:50.248573 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:29:50.249312 systemd[1]: Stopped target basic.target - Basic System. May 17 00:29:50.250326 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
May 17 00:29:50.251345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:29:50.252552 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:29:50.253770 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:29:50.254960 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:29:50.256167 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:29:50.257373 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:29:50.258531 systemd[1]: Stopped target swap.target - Swaps. May 17 00:29:50.259511 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:29:50.259605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:29:50.261564 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:50.262318 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:50.263507 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:29:50.263613 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:50.264754 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:29:50.264846 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:29:50.266313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:29:50.266443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:29:50.267187 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:29:50.267317 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:29:50.279795 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:29:50.282562 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:29:50.283156 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:29:50.283300 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:29:50.285164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:29:50.285299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:29:50.294275 ignition[998]: INFO : Ignition 2.19.0 May 17 00:29:50.294275 ignition[998]: INFO : Stage: umount May 17 00:29:50.299436 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:29:50.299436 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:29:50.299436 ignition[998]: INFO : umount: umount passed May 17 00:29:50.299436 ignition[998]: INFO : Ignition finished successfully May 17 00:29:50.297672 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:29:50.297785 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:29:50.300213 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:29:50.300422 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:29:50.303427 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:29:50.303479 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:29:50.304427 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:29:50.304484 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
May 17 00:29:50.305587 systemd[1]: Stopped target network.target - Network. May 17 00:29:50.306081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:29:50.306131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:29:50.306771 systemd[1]: Stopped target paths.target - Path Units. May 17 00:29:50.307315 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:29:50.312472 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:29:50.313095 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:29:50.318342 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:29:50.318877 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:29:50.318924 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:29:50.320485 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:29:50.320532 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:29:50.328049 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:29:50.328101 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:29:50.330076 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:29:50.330124 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:29:50.330914 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:29:50.333146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:29:50.335219 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:29:50.336159 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:29:50.336265 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:29:50.337581 systemd-networkd[769]: eth0: DHCPv6 lease lost May 17 00:29:50.341878 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:29:50.342000 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:29:50.347601 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:29:50.347750 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:29:50.353162 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:29:50.353229 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:50.359480 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:29:50.360596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:29:50.361180 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:29:50.363084 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:29:50.363136 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:50.363777 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:29:50.363827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:29:50.364833 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:29:50.364879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:50.366148 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 17 00:29:50.370996 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:29:50.371101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:29:50.378632 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:29:50.379304 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:29:50.380978 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:29:50.381115 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:29:50.382071 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:29:50.382235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:50.383679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:29:50.383750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:29:50.384742 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:29:50.384784 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:50.385778 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:29:50.385828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:29:50.387424 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:29:50.387472 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:29:50.388583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:29:50.388636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:29:50.395549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:29:50.396985 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:29:50.397043 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:29:50.397661 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:29:50.397709 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:29:50.398292 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:29:50.398336 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:50.399585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:29:50.399631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:50.403598 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:29:50.403725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:29:50.405257 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:29:50.416526 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:29:50.421934 systemd[1]: Switching root. May 17 00:29:50.452681 systemd-journald[177]: Journal stopped May 17 00:29:51.295898 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
May 17 00:29:51.295922 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:29:51.295934 kernel: SELinux: policy capability open_perms=1 May 17 00:29:51.295942 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:29:51.295954 kernel: SELinux: policy capability always_check_network=0 May 17 00:29:51.295962 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:29:51.295971 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:29:51.295980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:29:51.295988 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:29:51.295997 kernel: audit: type=1403 audit(1747441790.576:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:29:51.296006 systemd[1]: Successfully loaded SELinux policy in 46.605ms. May 17 00:29:51.296019 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.990ms. May 17 00:29:51.296029 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:29:51.296039 systemd[1]: Detected virtualization kvm. May 17 00:29:51.296049 systemd[1]: Detected architecture x86-64. May 17 00:29:51.296058 systemd[1]: Detected first boot. May 17 00:29:51.296071 systemd[1]: Initializing machine ID from random generator. May 17 00:29:51.296081 zram_generator::config[1040]: No configuration found. May 17 00:29:51.296092 systemd[1]: Populated /etc with preset unit settings. May 17 00:29:51.296101 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:29:51.296110 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:29:51.296120 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:29:51.296130 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:29:51.296142 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:29:51.296151 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:29:51.296162 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:29:51.296171 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:29:51.296181 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:29:51.296191 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:29:51.296200 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:29:51.296214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:29:51.296224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:29:51.296234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:29:51.296243 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:29:51.296253 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 17 00:29:51.296262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:29:51.296272 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:29:51.296281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:29:51.296293 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:29:51.296303 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:29:51.296315 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:29:51.296325 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:29:51.296335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:29:51.296344 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:29:51.296354 systemd[1]: Reached target slices.target - Slice Units. May 17 00:29:51.296364 systemd[1]: Reached target swap.target - Swaps. May 17 00:29:51.296376 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:29:51.296406 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:29:51.296417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:29:51.296427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:29:51.296436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:29:51.296451 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:29:51.296461 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:29:51.296470 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:29:51.296480 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:29:51.296489 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:51.296499 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:29:51.296508 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:29:51.296518 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:29:51.296530 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:29:51.296540 systemd[1]: Reached target machines.target - Containers. May 17 00:29:51.296550 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:29:51.296560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:29:51.296570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:29:51.296579 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:29:51.296589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:51.296599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:29:51.296610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:51.296620 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 17 00:29:51.296630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:29:51.296639 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:29:51.296649 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:29:51.296660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:29:51.296670 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:29:51.296679 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:29:51.296691 kernel: fuse: init (API version 7.39) May 17 00:29:51.296700 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:29:51.296709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:29:51.296719 kernel: loop: module loaded May 17 00:29:51.296728 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:29:51.296738 kernel: ACPI: bus type drm_connector registered May 17 00:29:51.296747 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:29:51.296773 systemd-journald[1130]: Collecting audit messages is disabled. May 17 00:29:51.296794 systemd-journald[1130]: Journal started May 17 00:29:51.296812 systemd-journald[1130]: Runtime Journal (/run/log/journal/d26de5713d554485a99e9f928dafea3b) is 8.0M, max 78.3M, 70.3M free. May 17 00:29:51.039165 systemd[1]: Queued start job for default target multi-user.target. May 17 00:29:51.056503 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:29:51.056876 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:29:51.303735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:29:51.303763 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:29:51.308411 systemd[1]: Stopped verity-setup.service. May 17 00:29:51.308495 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:51.311476 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:29:51.312439 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:29:51.313066 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:29:51.313704 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:29:51.314277 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:29:51.314886 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:29:51.315493 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:29:51.316191 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:29:51.317003 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:29:51.317841 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:29:51.318039 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:29:51.318837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:51.319035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:51.319978 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 00:29:51.320174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:29:51.321072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:51.321267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:51.322159 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:29:51.322372 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:29:51.323248 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:51.323694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:51.324524 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:29:51.325280 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:29:51.326136 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:29:51.340041 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:29:51.346522 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:29:51.351363 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:29:51.352337 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:29:51.352436 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:29:51.355140 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:29:51.360527 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:29:51.370816 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:29:51.371453 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:51.376686 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:29:51.378830 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:29:51.380496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:29:51.384518 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:29:51.385506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:29:51.392171 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:29:51.398347 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:29:51.409464 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:29:51.414672 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:29:51.415528 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:29:51.417154 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:29:51.435096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 17 00:29:51.439608 systemd-journald[1130]: Time spent on flushing to /var/log/journal/d26de5713d554485a99e9f928dafea3b is 53.633ms for 976 entries. May 17 00:29:51.439608 systemd-journald[1130]: System Journal (/var/log/journal/d26de5713d554485a99e9f928dafea3b) is 8.0M, max 195.6M, 187.6M free. May 17 00:29:51.506572 systemd-journald[1130]: Received client request to flush runtime journal. May 17 00:29:51.506609 kernel: loop0: detected capacity change from 0 to 140768 May 17 00:29:51.449648 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:29:51.451749 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:29:51.452941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:29:51.462647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:29:51.469671 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:29:51.494955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:29:51.497844 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:29:51.500438 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:29:51.512020 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:29:51.514095 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. May 17 00:29:51.514115 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. May 17 00:29:51.521951 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:29:51.529205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:29:51.539364 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:29:51.544469 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:29:51.587423 kernel: loop2: detected capacity change from 0 to 8 May 17 00:29:51.598218 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:29:51.606065 kernel: loop3: detected capacity change from 0 to 221472 May 17 00:29:51.604050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:29:51.620233 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 17 00:29:51.620252 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. May 17 00:29:51.629324 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:29:51.649508 kernel: loop4: detected capacity change from 0 to 140768 May 17 00:29:51.668668 kernel: loop5: detected capacity change from 0 to 142488 May 17 00:29:51.683048 kernel: loop6: detected capacity change from 0 to 8 May 17 00:29:51.685426 kernel: loop7: detected capacity change from 0 to 221472 May 17 00:29:51.702406 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 17 00:29:51.703068 (sd-merge)[1189]: Merged extensions into '/usr'. May 17 00:29:51.709780 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:29:51.709798 systemd[1]: Reloading... May 17 00:29:51.810423 zram_generator::config[1211]: No configuration found. 
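The "(sd-merge)" entries above show systemd-sysext discovering four extension images and overlaying them onto /usr, which is why systemd immediately reloads its unit set. A sketch of the discovery step only; the three search directories are the documented systemd-sysext locations, and treating every *.raw entry as one extension is a simplification (plain extension directories also count):

#!/usr/bin/env python3
"""List sysext images the way (sd-merge) names them above."""
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover() -> list[str]:
    names = set()
    for d in SEARCH_DIRS:
        for entry in Path(d).glob("*.raw"):   # empty iterator if dir is absent
            names.add(entry.stem)             # kubernetes.raw -> "kubernetes"
    return sorted(names)

if __name__ == "__main__":
    exts = discover()
    print("Using extensions", ", ".join(f"'{e}'" for e in exts) or "(none)")

On this host, /etc/extensions/kubernetes.raw is the symlink the Ignition files stage wrote earlier, which is how the "kubernetes" extension ends up in the merged set.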
May 17 00:29:51.918213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:29:51.941812 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:29:51.951274 systemd[1]: Reloading finished in 241 ms. May 17 00:29:51.974074 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:29:51.976004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:29:51.984542 systemd[1]: Starting ensure-sysext.service... May 17 00:29:51.986519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:29:51.999454 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... May 17 00:29:51.999466 systemd[1]: Reloading... May 17 00:29:52.032101 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:29:52.032379 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:29:52.033129 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:29:52.033333 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 17 00:29:52.033820 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 17 00:29:52.036717 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:29:52.036781 systemd-tmpfiles[1259]: Skipping /boot May 17 00:29:52.049202 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:29:52.049252 systemd-tmpfiles[1259]: Skipping /boot May 17 00:29:52.097406 zram_generator::config[1285]: No configuration found. May 17 00:29:52.194402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:29:52.226900 systemd[1]: Reloading finished in 227 ms. May 17 00:29:52.244014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:29:52.248753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:29:52.258573 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:29:52.261546 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:29:52.266017 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:29:52.270711 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:29:52.273300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:29:52.276645 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:29:52.282929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:52.284769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
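The ldconfig complaint above ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") refers to the four-byte ELF magic; a minimal sketch of the same check, with the path taken from the warning (it is a plain-text config file, so it correctly fails the test):

    # Sketch of an ELF magic-byte check, as referenced by the ldconfig
    # warning above. Illustrative only, not ldconfig's actual code.
    def is_elf(path: str) -> bool:
        with open(path, "rb") as f:
            return f.read(4) == b"\x7fELF"

    print(is_elf("/lib/ld.so.conf"))  # expected: False (text config, not ELF)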
May 17 00:29:52.293662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:52.296270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:52.299562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:29:52.300267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:52.300352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:52.301128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:52.301500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:52.314850 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:29:52.315811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:52.316548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:52.318035 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:52.319588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:52.325645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:52.327710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:29:52.336768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:52.340565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:52.344616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:29:52.345146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:52.345239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:52.347754 systemd-udevd[1336]: Using default interface naming scheme 'v255'. May 17 00:29:52.351159 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:29:52.361586 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:29:52.364440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:52.364667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:52.367276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:52.368028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:52.370035 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:52.370190 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:52.384027 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:29:52.387312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:52.388470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 17 00:29:52.393569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:29:52.395518 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:29:52.399593 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:29:52.403637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:29:52.404645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:29:52.404728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:29:52.405962 systemd[1]: Finished ensure-sysext.service. May 17 00:29:52.407596 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:29:52.407798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:29:52.409775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:29:52.410457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:29:52.414272 augenrules[1371]: No rules May 17 00:29:52.415707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:29:52.429112 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:29:52.430203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:29:52.432369 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:29:52.435142 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:29:52.436237 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:29:52.437907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:29:52.438639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:29:52.442378 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:29:52.443303 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:29:52.443837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:29:52.456538 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:29:52.457702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:29:52.457731 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:29:52.533526 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:29:52.589711 systemd-networkd[1399]: lo: Link UP May 17 00:29:52.589723 systemd-networkd[1399]: lo: Gained carrier May 17 00:29:52.592129 systemd-networkd[1399]: Enumeration completed May 17 00:29:52.592219 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:29:52.600611 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:29:52.601199 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
May 17 00:29:52.601738 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:29:52.603845 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:29:52.603859 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:29:52.606266 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:29:52.606302 systemd-networkd[1399]: eth0: Link UP May 17 00:29:52.606306 systemd-networkd[1399]: eth0: Gained carrier May 17 00:29:52.606314 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:29:52.606514 systemd-resolved[1335]: Positive Trust Anchors: May 17 00:29:52.606664 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:29:52.606693 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:29:52.611877 systemd-resolved[1335]: Defaulting to hostname 'linux'. May 17 00:29:52.613793 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:29:52.614316 systemd[1]: Reached target network.target - Network. May 17 00:29:52.615701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:29:52.624440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:29:52.634413 kernel: ACPI: button: Power Button [PWRF] May 17 00:29:52.640428 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:29:52.643756 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:29:52.646656 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:29:52.662439 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1400) May 17 00:29:52.674406 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:29:52.727298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:29:52.741416 kernel: EDAC MC: Ver: 3.0.0 May 17 00:29:52.749420 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:29:52.766068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:29:52.771948 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:29:52.782946 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:29:52.790574 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:29:52.791607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
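The negative trust anchors listed above cover, among others, the RFC 1918 reverse zones such as 10.in-addr.arpa, which systemd-resolved exempts from DNSSEC validation because private address space can never validate against the public root; a small sketch mapping a private address to the reverse zone it falls under (the address itself is illustrative):

    # Sketch: map a private IPv4 address to the reverse-DNS name covered by
    # one of the negative trust anchors listed above (10.in-addr.arpa).
    import ipaddress

    addr = ipaddress.ip_address("10.1.2.3")  # illustrative private address
    print(addr.is_private)                   # True
    print(addr.reverse_pointer)              # 3.2.1.10.in-addr.arpa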
May 17 00:29:52.804334 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:29:52.826590 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:29:52.859125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:29:52.860700 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:29:52.861293 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:29:52.861987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:29:52.862831 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:29:52.863664 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:29:52.864571 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:29:52.865167 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:29:52.865776 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:29:52.865812 systemd[1]: Reached target paths.target - Path Units. May 17 00:29:52.866522 systemd[1]: Reached target timers.target - Timer Units. May 17 00:29:52.868109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:29:52.870666 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:29:52.885990 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:29:52.887837 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:29:52.889010 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:29:52.889681 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:29:52.890327 systemd[1]: Reached target basic.target - Basic System. May 17 00:29:52.890911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:29:52.890949 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:29:52.897549 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:29:52.902529 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:29:52.903351 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:29:52.911487 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:29:52.913764 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:29:52.919585 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:29:52.922159 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:29:52.926601 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:29:52.933502 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:29:52.949684 jq[1444]: false May 17 00:29:52.945760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:29:52.957554 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 17 00:29:52.962181 extend-filesystems[1445]: Found loop4 May 17 00:29:52.962181 extend-filesystems[1445]: Found loop5 May 17 00:29:52.962181 extend-filesystems[1445]: Found loop6 May 17 00:29:52.962181 extend-filesystems[1445]: Found loop7 May 17 00:29:52.962181 extend-filesystems[1445]: Found sda May 17 00:29:52.962181 extend-filesystems[1445]: Found sda1 May 17 00:29:52.962181 extend-filesystems[1445]: Found sda2 May 17 00:29:52.962181 extend-filesystems[1445]: Found sda3 May 17 00:29:52.962181 extend-filesystems[1445]: Found usr May 17 00:29:52.962181 extend-filesystems[1445]: Found sda4 May 17 00:29:52.962181 extend-filesystems[1445]: Found sda6 May 17 00:29:52.962181 extend-filesystems[1445]: Found sda7 May 17 00:29:52.962181 extend-filesystems[1445]: Found sda9 May 17 00:29:52.962181 extend-filesystems[1445]: Checking size of /dev/sda9 May 17 00:29:52.969875 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:29:52.980006 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:29:52.980517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:29:52.981829 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:29:52.985513 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:29:52.990442 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:29:52.999214 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:29:52.999471 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:29:53.002022 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:29:53.002229 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:29:53.019895 extend-filesystems[1445]: Resized partition /dev/sda9 May 17 00:29:53.021512 systemd-networkd[1399]: eth0: DHCPv4 address 172.232.0.241/24, gateway 172.232.0.1 acquired from 23.213.14.22 May 17 00:29:53.023784 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. May 17 00:29:53.029372 dbus-daemon[1443]: [system] SELinux support is enabled May 17 00:29:53.030673 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:29:53.036834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:29:53.036879 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:29:53.037579 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:29:53.037608 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
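The DHCPv4 lease logged above (172.232.0.241/24, gateway 172.232.0.1) implies the usual /24 layout; a quick check with the standard-library ipaddress module confirms the network parameters and that the gateway is on-link:

    # Sketch: network parameters implied by the DHCPv4 lease logged above.
    import ipaddress

    iface = ipaddress.ip_interface("172.232.0.241/24")
    print(iface.network)                     # 172.232.0.0/24
    print(iface.network.netmask)             # 255.255.255.0
    print(iface.network.broadcast_address)   # 172.232.0.255
    print(ipaddress.ip_address("172.232.0.1") in iface.network)  # True: gateway is on-link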
May 17 00:29:53.043036 dbus-daemon[1443]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1399 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:29:53.048981 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:29:53.051806 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:29:53.064855 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) May 17 00:29:53.064543 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:29:53.071422 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 17 00:29:53.074116 jq[1459]: true May 17 00:29:53.076232 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:29:53.077768 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:29:53.083514 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1391) May 17 00:29:53.088068 tar[1462]: linux-amd64/helm May 17 00:29:53.113574 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:29:53.113602 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:29:53.117876 systemd-logind[1455]: New seat seat0. May 17 00:29:53.119032 update_engine[1458]: I20250517 00:29:53.118587 1458 main.cc:92] Flatcar Update Engine starting May 17 00:29:53.122525 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:29:53.124750 jq[1487]: true May 17 00:29:53.134353 systemd[1]: Started update-engine.service - Update Engine. May 17 00:29:53.135262 update_engine[1458]: I20250517 00:29:53.135219 1458 update_check_scheduler.cc:74] Next update check in 4m56s May 17 00:29:53.144598 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:29:53.188475 coreos-metadata[1442]: May 17 00:29:53.179 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 17 00:29:53.289593 coreos-metadata[1442]: May 17 00:29:53.289 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 17 00:29:54.463097 systemd-resolved[1335]: Clock change detected. Flushing caches. May 17 00:29:54.463232 systemd-timesyncd[1382]: Contacted time server 23.186.168.131:123 (0.flatcar.pool.ntp.org). May 17 00:29:54.463302 systemd-timesyncd[1382]: Initial clock synchronization to Sat 2025-05-17 00:29:54.463051 UTC. May 17 00:29:54.476007 bash[1509]: Updated "/home/core/.ssh/authorized_keys" May 17 00:29:54.479537 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:29:54.490635 systemd[1]: Starting sshkeys.service... May 17 00:29:54.513789 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:29:54.521660 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:29:54.523547 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:29:54.523639 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
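The kernel line above reports ext4 on sda9 growing from 553472 to 20360187 blocks; converting with the 4 KiB block size that resize2fs reports a little further down gives the root filesystem's before/after capacity:

    # Sketch: convert the ext4 resize logged above (553472 -> 20360187 blocks)
    # into bytes, using the 4 KiB block size resize2fs reports below.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 20_360_187
    gib = 1024 ** 3
    print(f"{old_blocks * BLOCK / gib:.2f} GiB -> {new_blocks * BLOCK / gib:.2f} GiB")
    # roughly 2.11 GiB -> 77.67 GiB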
May 17 00:29:54.524922 dbus-daemon[1443]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1482 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:29:54.532750 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:29:54.549951 polkitd[1518]: Started polkitd version 121 May 17 00:29:54.554119 containerd[1476]: time="2025-05-17T00:29:54.554065070Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:29:54.558648 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 17 00:29:54.569360 polkitd[1518]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:29:54.570969 polkitd[1518]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:29:54.571075 containerd[1476]: time="2025-05-17T00:29:54.571048580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:54.575475 containerd[1476]: time="2025-05-17T00:29:54.575273130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:54.575475 containerd[1476]: time="2025-05-17T00:29:54.575297920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:29:54.575475 containerd[1476]: time="2025-05-17T00:29:54.575314000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:29:54.573662 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:29:54.573017 polkitd[1518]: Finished loading, compiling and executing 2 rules May 17 00:29:54.573531 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:29:54.575030 polkitd[1518]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:29:54.575873 containerd[1476]: time="2025-05-17T00:29:54.575854700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:29:54.576304 containerd[1476]: time="2025-05-17T00:29:54.576285870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:29:54.576419 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:29:54.576419 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 10 May 17 00:29:54.576419 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 17 00:29:54.591546 extend-filesystems[1445]: Resized filesystem in /dev/sda9 May 17 00:29:54.578877 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577264670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577281940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577489310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577503890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577516020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577525330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577622490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.577846170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.578867730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.578883660Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:29:54.592781 containerd[1476]: time="2025-05-17T00:29:54.578998840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:29:54.579131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:29:54.594656 containerd[1476]: time="2025-05-17T00:29:54.579053550Z" level=info msg="metadata content store policy set" policy=shared May 17 00:29:54.595224 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597542970Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597578730Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597596830Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597608530Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597619100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597727600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597876530Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597960590Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597971830Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597981670Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.597992520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.598002310Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.598012630Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599339 containerd[1476]: time="2025-05-17T00:29:54.598022540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598033050Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598043140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598052850Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598060730Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598076810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598088200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598097820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598107630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598117510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598127410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598136480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598145740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598155250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599554 containerd[1476]: time="2025-05-17T00:29:54.598167040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598182520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598191780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598201650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598213540Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598230830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598244210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598252830Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598301470Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598314720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598322440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598330720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598337300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598347370Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:29:54.599731 containerd[1476]: time="2025-05-17T00:29:54.598355210Z" level=info msg="NRI interface is disabled by configuration." May 17 00:29:54.599901 containerd[1476]: time="2025-05-17T00:29:54.598362700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:29:54.599919 containerd[1476]: time="2025-05-17T00:29:54.598578000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:29:54.599919 containerd[1476]: time="2025-05-17T00:29:54.598629320Z" level=info msg="Connect containerd service" May 17 00:29:54.599919 containerd[1476]: time="2025-05-17T00:29:54.598658360Z" level=info msg="using legacy CRI server" May 17 00:29:54.599919 containerd[1476]: time="2025-05-17T00:29:54.598664310Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:29:54.599919 containerd[1476]: time="2025-05-17T00:29:54.598734180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:29:54.599919 containerd[1476]: time="2025-05-17T00:29:54.599178820Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:29:54.604010 
containerd[1476]: time="2025-05-17T00:29:54.602553090Z" level=info msg="Start subscribing containerd event" May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.602586560Z" level=info msg="Start recovering state" May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.602636500Z" level=info msg="Start event monitor" May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.602656410Z" level=info msg="Start snapshots syncer" May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.602663580Z" level=info msg="Start cni network conf syncer for default" May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.602670090Z" level=info msg="Start streaming server" May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.602944190Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:29:54.604010 containerd[1476]: time="2025-05-17T00:29:54.603000500Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:29:54.603165 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:29:54.607898 containerd[1476]: time="2025-05-17T00:29:54.607881540Z" level=info msg="containerd successfully booted in 0.049860s" May 17 00:29:54.611064 systemd-resolved[1335]: System hostname changed to '172-232-0-241'. May 17 00:29:54.611838 systemd-hostnamed[1482]: Hostname set to <172-232-0-241> (transient) May 17 00:29:54.623334 coreos-metadata[1516]: May 17 00:29:54.623 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 17 00:29:54.627284 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:29:54.636371 coreos-metadata[1442]: May 17 00:29:54.636 INFO Fetch successful May 17 00:29:54.636439 coreos-metadata[1442]: May 17 00:29:54.636 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 17 00:29:54.648514 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:29:54.655687 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:29:54.663065 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:29:54.663319 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:29:54.673490 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:29:54.683961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:29:54.692997 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:29:54.695335 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:29:54.696102 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:29:54.716013 coreos-metadata[1516]: May 17 00:29:54.715 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 17 00:29:54.853693 coreos-metadata[1516]: May 17 00:29:54.853 INFO Fetch successful May 17 00:29:54.873387 update-ssh-keys[1555]: Updated "/home/core/.ssh/authorized_keys" May 17 00:29:54.874911 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:29:54.877528 systemd[1]: Finished sshkeys.service. May 17 00:29:54.890999 coreos-metadata[1442]: May 17 00:29:54.890 INFO Fetch successful May 17 00:29:54.926049 tar[1462]: linux-amd64/LICENSE May 17 00:29:54.926147 tar[1462]: linux-amd64/README.md May 17 00:29:54.939143 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:29:54.971011 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
May 17 00:29:54.971895 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:29:55.297651 systemd-networkd[1399]: eth0: Gained IPv6LL May 17 00:29:55.301389 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:29:55.302597 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:29:55.309698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:29:55.317738 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:29:55.336229 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:29:56.089379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:29:56.090469 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:29:56.092127 systemd[1]: Startup finished in 678ms (kernel) + 6.906s (initrd) + 4.414s (userspace) = 11.999s. May 17 00:29:56.109736 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:29:56.543630 kubelet[1596]: E0517 00:29:56.543165 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:29:56.548831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:29:56.549002 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:29:59.183562 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:29:59.188890 systemd[1]: Started sshd@0-172.232.0.241:22-139.178.89.65:37606.service - OpenSSH per-connection server daemon (139.178.89.65:37606). May 17 00:29:59.537396 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 37606 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:29:59.539683 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:29:59.549089 systemd-logind[1455]: New session 1 of user core. May 17 00:29:59.550514 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:29:59.555649 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:29:59.572989 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:29:59.580709 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:29:59.585659 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:29:59.676406 systemd[1611]: Queued start job for default target default.target. May 17 00:29:59.684575 systemd[1611]: Created slice app.slice - User Application Slice. May 17 00:29:59.684602 systemd[1611]: Reached target paths.target - Paths. May 17 00:29:59.684615 systemd[1611]: Reached target timers.target - Timers. May 17 00:29:59.685951 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:29:59.697171 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:29:59.697292 systemd[1611]: Reached target sockets.target - Sockets. May 17 00:29:59.697305 systemd[1611]: Reached target basic.target - Basic System. 
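The startup summary above breaks boot time into 678ms (kernel) + 6.906s (initrd) + 4.414s (userspace) = 11.999s; summing the printed figures gives 11.998s, with the 1ms discrepancy explained by systemd rounding each raw microsecond value independently:

    # Sketch: sanity-check the boot-time breakdown systemd printed above.
    kernel, initrd, userspace = 0.678, 6.906, 4.414
    print(f"{kernel + initrd + userspace:.3f}s")  # 11.998s vs the reported 11.999s (per-term rounding)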
May 17 00:29:59.697340 systemd[1611]: Reached target default.target - Main User Target. May 17 00:29:59.697371 systemd[1611]: Startup finished in 103ms. May 17 00:29:59.697537 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:29:59.699237 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:29:59.958172 systemd[1]: Started sshd@1-172.232.0.241:22-139.178.89.65:37614.service - OpenSSH per-connection server daemon (139.178.89.65:37614). May 17 00:30:00.279556 sshd[1622]: Accepted publickey for core from 139.178.89.65 port 37614 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:30:00.281343 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:00.286321 systemd-logind[1455]: New session 2 of user core. May 17 00:30:00.295738 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:30:00.524224 sshd[1622]: pam_unix(sshd:session): session closed for user core May 17 00:30:00.528641 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. May 17 00:30:00.529772 systemd[1]: sshd@1-172.232.0.241:22-139.178.89.65:37614.service: Deactivated successfully. May 17 00:30:00.531843 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:30:00.532657 systemd-logind[1455]: Removed session 2. May 17 00:30:00.587006 systemd[1]: Started sshd@2-172.232.0.241:22-139.178.89.65:37622.service - OpenSSH per-connection server daemon (139.178.89.65:37622). May 17 00:30:00.917832 sshd[1629]: Accepted publickey for core from 139.178.89.65 port 37622 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:30:00.919400 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:00.924594 systemd-logind[1455]: New session 3 of user core. May 17 00:30:00.934733 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:30:01.170667 sshd[1629]: pam_unix(sshd:session): session closed for user core May 17 00:30:01.175000 systemd[1]: sshd@2-172.232.0.241:22-139.178.89.65:37622.service: Deactivated successfully. May 17 00:30:01.177414 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:30:01.179014 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. May 17 00:30:01.179976 systemd-logind[1455]: Removed session 3. May 17 00:30:01.233201 systemd[1]: Started sshd@3-172.232.0.241:22-139.178.89.65:37626.service - OpenSSH per-connection server daemon (139.178.89.65:37626). May 17 00:30:01.569894 sshd[1636]: Accepted publickey for core from 139.178.89.65 port 37626 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:30:01.571777 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:01.575485 systemd-logind[1455]: New session 4 of user core. May 17 00:30:01.579559 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:30:01.827137 sshd[1636]: pam_unix(sshd:session): session closed for user core May 17 00:30:01.831013 systemd[1]: sshd@3-172.232.0.241:22-139.178.89.65:37626.service: Deactivated successfully. May 17 00:30:01.832959 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:30:01.834185 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. May 17 00:30:01.835328 systemd-logind[1455]: Removed session 4. May 17 00:30:01.898073 systemd[1]: Started sshd@4-172.232.0.241:22-139.178.89.65:37628.service - OpenSSH per-connection server daemon (139.178.89.65:37628). 
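The "SHA256:ULv753..." strings in the sshd acceptance lines above are OpenSSH key fingerprints: the unpadded base64 of a SHA-256 digest over the raw public-key blob. A minimal sketch of that derivation (the host-key path is illustrative):

    # Sketch: how the SHA256:... fingerprints in the sshd lines above are
    # derived -- base64(sha256(raw key blob)) with '=' padding stripped.
    import base64, hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])  # second field is the base64 key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/etc/ssh/ssh_host_rsa_key.pub") as f:  # illustrative path
        print(ssh_fingerprint(f.read()))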
May 17 00:30:02.241524 sshd[1643]: Accepted publickey for core from 139.178.89.65 port 37628 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:30:02.243666 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:02.248918 systemd-logind[1455]: New session 5 of user core. May 17 00:30:02.255561 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:30:02.456279 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:30:02.456627 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:30:02.470335 sudo[1646]: pam_unix(sudo:session): session closed for user root May 17 00:30:02.526050 sshd[1643]: pam_unix(sshd:session): session closed for user core May 17 00:30:02.529647 systemd[1]: sshd@4-172.232.0.241:22-139.178.89.65:37628.service: Deactivated successfully. May 17 00:30:02.531748 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:30:02.533740 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. May 17 00:30:02.534816 systemd-logind[1455]: Removed session 5. May 17 00:30:02.590087 systemd[1]: Started sshd@5-172.232.0.241:22-139.178.89.65:37630.service - OpenSSH per-connection server daemon (139.178.89.65:37630). May 17 00:30:02.927129 sshd[1651]: Accepted publickey for core from 139.178.89.65 port 37630 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:30:02.929129 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:02.934176 systemd-logind[1455]: New session 6 of user core. May 17 00:30:02.943590 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:30:03.128183 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:30:03.128541 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:30:03.132219 sudo[1655]: pam_unix(sudo:session): session closed for user root May 17 00:30:03.137544 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:30:03.137850 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:30:03.155622 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:30:03.157798 auditctl[1658]: No rules May 17 00:30:03.158329 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:30:03.158660 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:30:03.160601 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:30:03.189725 augenrules[1676]: No rules May 17 00:30:03.191183 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:30:03.192803 sudo[1654]: pam_unix(sudo:session): session closed for user root May 17 00:30:03.244928 sshd[1651]: pam_unix(sshd:session): session closed for user core May 17 00:30:03.249995 systemd[1]: sshd@5-172.232.0.241:22-139.178.89.65:37630.service: Deactivated successfully. May 17 00:30:03.252389 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:30:03.253201 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. May 17 00:30:03.254247 systemd-logind[1455]: Removed session 6. 
May 17 00:30:03.306695 systemd[1]: Started sshd@6-172.232.0.241:22-139.178.89.65:37640.service - OpenSSH per-connection server daemon (139.178.89.65:37640). May 17 00:30:03.659456 sshd[1684]: Accepted publickey for core from 139.178.89.65 port 37640 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:30:03.661561 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:30:03.667417 systemd-logind[1455]: New session 7 of user core. May 17 00:30:03.676551 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:30:03.863170 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:30:03.863531 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:30:04.132960 (dockerd)[1703]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:30:04.133086 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:30:04.394978 dockerd[1703]: time="2025-05-17T00:30:04.394829380Z" level=info msg="Starting up" May 17 00:30:04.466120 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport579890325-merged.mount: Deactivated successfully. May 17 00:30:04.489763 dockerd[1703]: time="2025-05-17T00:30:04.489542540Z" level=info msg="Loading containers: start." May 17 00:30:04.590741 kernel: Initializing XFRM netlink socket May 17 00:30:04.671475 systemd-networkd[1399]: docker0: Link UP May 17 00:30:04.682136 dockerd[1703]: time="2025-05-17T00:30:04.682085060Z" level=info msg="Loading containers: done." May 17 00:30:04.699048 dockerd[1703]: time="2025-05-17T00:30:04.699004680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:30:04.699196 dockerd[1703]: time="2025-05-17T00:30:04.699085470Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:30:04.699196 dockerd[1703]: time="2025-05-17T00:30:04.699185620Z" level=info msg="Daemon has completed initialization" May 17 00:30:04.745190 dockerd[1703]: time="2025-05-17T00:30:04.744445880Z" level=info msg="API listen on /run/docker.sock" May 17 00:30:04.744593 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:30:05.300700 containerd[1476]: time="2025-05-17T00:30:05.300441630Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:30:05.462527 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1509007981-merged.mount: Deactivated successfully. May 17 00:30:06.263656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456056044.mount: Deactivated successfully. May 17 00:30:06.799482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:30:06.811143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:30:06.949034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
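kubelet already exited once earlier because /var/lib/kubelet/config.yaml does not exist (kubeadm would normally write it during cluster join), and the restarted instance below fails the same way; a minimal sketch of the pre-flight condition it trips over, using the path from the error message (illustrative check only, not kubelet's own code):

    # Sketch of the missing-config condition behind the kubelet failures in
    # this log. Illustrative only; kubelet performs this check internally.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        raise SystemExit(f"failed to load Kubelet config file {cfg}: no such file or directory")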
May 17 00:30:06.953653 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:30:06.987693 kubelet[1902]: E0517 00:30:06.987588 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:30:06.993219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:30:06.993392 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:30:07.522392 containerd[1476]: time="2025-05-17T00:30:07.522310020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:07.523446 containerd[1476]: time="2025-05-17T00:30:07.523389830Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:30:07.524035 containerd[1476]: time="2025-05-17T00:30:07.523972240Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:07.527289 containerd[1476]: time="2025-05-17T00:30:07.526935050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:07.528005 containerd[1476]: time="2025-05-17T00:30:07.527968080Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.22748524s" May 17 00:30:07.528058 containerd[1476]: time="2025-05-17T00:30:07.528012280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:30:07.534479 containerd[1476]: time="2025-05-17T00:30:07.534231310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:30:09.216133 containerd[1476]: time="2025-05-17T00:30:09.216076490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:09.216955 containerd[1476]: time="2025-05-17T00:30:09.216911090Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:30:09.218347 containerd[1476]: time="2025-05-17T00:30:09.217396390Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:09.219813 containerd[1476]: time="2025-05-17T00:30:09.219471940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 
00:30:09.220376 containerd[1476]: time="2025-05-17T00:30:09.220351890Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.68580962s" May 17 00:30:09.220409 containerd[1476]: time="2025-05-17T00:30:09.220381360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:30:09.220888 containerd[1476]: time="2025-05-17T00:30:09.220873310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:30:10.329710 containerd[1476]: time="2025-05-17T00:30:10.329650770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:10.330547 containerd[1476]: time="2025-05-17T00:30:10.330397310Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:30:10.331446 containerd[1476]: time="2025-05-17T00:30:10.330962220Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:10.332879 containerd[1476]: time="2025-05-17T00:30:10.332838520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:10.333751 containerd[1476]: time="2025-05-17T00:30:10.333596950Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.11270147s" May 17 00:30:10.333751 containerd[1476]: time="2025-05-17T00:30:10.333622310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:30:10.334194 containerd[1476]: time="2025-05-17T00:30:10.334179510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:30:11.424444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012036382.mount: Deactivated successfully. 
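
The three control-plane pulls above each log an image size and a wall-clock duration ("size X in Ys"), which gives an effective pull rate. The size is what containerd reports for the image, not bytes on the wire, so this is an effective rate rather than measured network bandwidth; a quick calculation from the logged values:

    # Sketch: effective pull rate for the three control-plane images above,
    # using the size and duration containerd logs ("size X in Ys").
    pulls = {
        "kube-apiserver:v1.31.9":          (28075645, 2.22748524),
        "kube-controller-manager:v1.31.9": (26315362, 1.68580962),
        "kube-scheduler:v1.31.9":          (20386169, 1.11270147),
    }
    for image, (size_bytes, seconds) in pulls.items():
        rate = size_bytes / seconds / 1e6   # MB/s (decimal megabytes)
        print(f"{image}: {rate:.1f} MB/s")  # roughly 12.6, 15.6, 18.3 MB/s
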
May 17 00:30:11.687228 containerd[1476]: time="2025-05-17T00:30:11.687073060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:11.688279 containerd[1476]: time="2025-05-17T00:30:11.688248930Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:30:11.688552 containerd[1476]: time="2025-05-17T00:30:11.688502970Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:11.689850 containerd[1476]: time="2025-05-17T00:30:11.689832750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:11.690900 containerd[1476]: time="2025-05-17T00:30:11.690398460Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.35615325s" May 17 00:30:11.690900 containerd[1476]: time="2025-05-17T00:30:11.690455530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:30:11.691033 containerd[1476]: time="2025-05-17T00:30:11.691006700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:30:12.339755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667022342.mount: Deactivated successfully. 
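
The tmpmount units above (var-lib-containerd-tmpmounts-containerd\x2dmount....mount) illustrate systemd's unit-name escaping: "/" becomes "-" and a literal "-" becomes \x2d. A sketch that decodes such a unit name back into its mount path, splitting on "-" before unescaping so that restored dashes are not mistaken for path separators:

    # Sketch: decode the systemd mount-unit names above back into paths.
    # systemd turns "/" into "-" and escapes a literal "-" as \x2d, which
    # is why the containerd tmpmount units contain "\x2d".
    import re

    def _unescape(seg: str) -> str:
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), seg)

    def unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        # "-" separates path components; literal dashes were escaped as
        # \x2d, so splitting first is safe.
        return "/" + "/".join(_unescape(p) for p in name.split("-"))

    print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2012036382.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount2012036382
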
May 17 00:30:12.860625 containerd[1476]: time="2025-05-17T00:30:12.860538220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:12.861616 containerd[1476]: time="2025-05-17T00:30:12.861560830Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:30:12.861902 containerd[1476]: time="2025-05-17T00:30:12.861866770Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:12.863887 containerd[1476]: time="2025-05-17T00:30:12.863869930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:12.865167 containerd[1476]: time="2025-05-17T00:30:12.864710320Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.17367593s" May 17 00:30:12.865167 containerd[1476]: time="2025-05-17T00:30:12.864738940Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:30:12.865374 containerd[1476]: time="2025-05-17T00:30:12.865353030Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:30:13.451769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918722388.mount: Deactivated successfully. 
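
Each completed pull above is summarized in a single containerd message carrying the repo tag, the repo digest, and the size. A sketch that extracts those fields, assuming only the key phrases that actually appear in these messages; the sample string is abridged from the coredns line above:

    # Sketch: pull the tag, digest and size out of a containerd
    # "Pulled image" message like the coredns line above.
    import re

    msg = ('Pulled image "registry.k8s.io/coredns/coredns:v1.11.3" with image id '
           '"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6", '
           'repo tag "registry.k8s.io/coredns/coredns:v1.11.3", repo digest '
           '"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e", '
           'size "18562039" in 1.17367593s')

    m = re.search(r'repo tag "([^"]+)", repo digest "([^"]+)", size "(\d+)" in ([\d.]+)s', msg)
    tag, digest, size, secs = m.groups()
    print(tag, digest.split("@")[1][:19], f"{int(size)/1e6:.1f} MB in {secs}s")
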
May 17 00:30:13.457117 containerd[1476]: time="2025-05-17T00:30:13.456500000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:13.457117 containerd[1476]: time="2025-05-17T00:30:13.457085280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:30:13.457493 containerd[1476]: time="2025-05-17T00:30:13.457457360Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:13.459250 containerd[1476]: time="2025-05-17T00:30:13.459212690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:13.460467 containerd[1476]: time="2025-05-17T00:30:13.459962440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 594.58338ms" May 17 00:30:13.460467 containerd[1476]: time="2025-05-17T00:30:13.460018320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:30:13.464740 containerd[1476]: time="2025-05-17T00:30:13.464522070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:30:14.110177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812088247.mount: Deactivated successfully. May 17 00:30:15.334903 containerd[1476]: time="2025-05-17T00:30:15.334830420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:15.336031 containerd[1476]: time="2025-05-17T00:30:15.335857990Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:30:15.338338 containerd[1476]: time="2025-05-17T00:30:15.336472270Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:15.341394 containerd[1476]: time="2025-05-17T00:30:15.339573000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:15.341394 containerd[1476]: time="2025-05-17T00:30:15.340906000Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.87634833s" May 17 00:30:15.341394 containerd[1476]: time="2025-05-17T00:30:15.340942680Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:30:17.110844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
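
kubelet.service has now been rescheduled twice (counter 1 at 00:30:06.799, counter 2 at 00:30:17.110), each time after the previous start exited with status=1 for want of /var/lib/kubelet/config.yaml. The spacing between a failure and the next scheduled restart can be read straight off the journal timestamps; it comes out near ten seconds, consistent with a RestartSec=10s setting, although the unit file itself is not shown in this log.

    # Sketch: gap between the kubelet failure and the next scheduled
    # restart, taken from the journal timestamps above. Roughly ten
    # seconds, consistent with RestartSec=10s (an assumption; the unit
    # file is not part of this log).
    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    failed      = datetime.strptime("00:30:06.993392", fmt)  # Failed with result 'exit-code'
    rescheduled = datetime.strptime("00:30:17.110844", fmt)  # restart counter is at 2
    print((rescheduled - failed).total_seconds())            # ~10.1 s
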
May 17 00:30:17.119576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:30:17.132178 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:30:17.132262 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:30:17.132642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:30:17.144644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:30:17.168493 systemd[1]: Reloading requested from client PID 2071 ('systemctl') (unit session-7.scope)... May 17 00:30:17.168564 systemd[1]: Reloading... May 17 00:30:17.308457 zram_generator::config[2114]: No configuration found. May 17 00:30:17.390402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:30:17.442705 systemd[1]: Reloading finished in 273 ms. May 17 00:30:17.487640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:30:17.487723 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:30:17.487937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:30:17.493657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:30:17.625461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:30:17.629352 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:30:17.662034 kubelet[2165]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:30:17.662390 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:30:17.662455 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
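
The three deprecation warnings above all point at the KubeletConfiguration file passed via --config. Below is a hedged sketch of the file-based equivalent, using field names from kubelet.config.k8s.io/v1beta1 and emitting JSON, which the kubelet accepts alongside YAML. The endpoint value is a placeholder since the actual flag value is not shown here; the plugin dir matches the Flexvolume path the kubelet recreates later in this log; and --pod-infra-container-image has no config equivalent because, per the warning, the image garbage collector takes the sandbox image from CRI.

    # Sketch: config-file equivalent of the two deprecated flags that the
    # warnings above say should move into the file passed via --config.
    # Field names are from kubelet.config.k8s.io/v1beta1; the endpoint is
    # a placeholder, as the actual flag values are not visible in this log.
    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",        # placeholder
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",   # path seen below
    }
    print(json.dumps(kubelet_config, indent=2))
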
May 17 00:30:17.662562 kubelet[2165]: I0517 00:30:17.662536 2165 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:30:17.820462 kubelet[2165]: I0517 00:30:17.820406 2165 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:30:17.820622 kubelet[2165]: I0517 00:30:17.820609 2165 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:30:17.820877 kubelet[2165]: I0517 00:30:17.820865 2165 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:30:17.840389 kubelet[2165]: I0517 00:30:17.840374 2165 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:30:17.840669 kubelet[2165]: E0517 00:30:17.840633 2165 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.232.0.241:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:17.846358 kubelet[2165]: E0517 00:30:17.846310 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:30:17.846358 kubelet[2165]: I0517 00:30:17.846356 2165 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:30:17.850089 kubelet[2165]: I0517 00:30:17.850065 2165 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:30:17.850666 kubelet[2165]: I0517 00:30:17.850642 2165 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:30:17.850786 kubelet[2165]: I0517 00:30:17.850755 2165 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:30:17.850900 kubelet[2165]: I0517 00:30:17.850778 2165 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-0-241","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:30:17.850971 kubelet[2165]: I0517 00:30:17.850902 2165 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:30:17.850971 kubelet[2165]: I0517 00:30:17.850910 2165 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:30:17.851006 kubelet[2165]: I0517 00:30:17.850993 2165 state_mem.go:36] "Initialized new in-memory state store" May 17 00:30:17.853484 kubelet[2165]: I0517 00:30:17.853152 2165 kubelet.go:408] "Attempting to sync node with API server" May 17 00:30:17.853484 kubelet[2165]: I0517 00:30:17.853167 2165 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:30:17.853484 kubelet[2165]: I0517 00:30:17.853192 2165 kubelet.go:314] "Adding apiserver pod source" May 17 00:30:17.853484 kubelet[2165]: I0517 00:30:17.853205 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:30:17.857284 kubelet[2165]: W0517 00:30:17.857239 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:17.857369 kubelet[2165]: E0517 00:30:17.857344 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:17.857455 kubelet[2165]: I0517 00:30:17.857418 2165 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:30:17.858748 kubelet[2165]: I0517 00:30:17.857706 2165 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:30:17.858748 kubelet[2165]: W0517 00:30:17.858151 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:30:17.859629 kubelet[2165]: W0517 00:30:17.859600 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.0.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:17.859695 kubelet[2165]: E0517 00:30:17.859630 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.0.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:17.861417 kubelet[2165]: I0517 00:30:17.860455 2165 server.go:1274] "Started kubelet" May 17 00:30:17.861417 kubelet[2165]: I0517 00:30:17.861290 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:30:17.864828 kubelet[2165]: E0517 00:30:17.863705 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.0.241:6443/api/v1/namespaces/default/events\": dial tcp 172.232.0.241:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-0-241.18402913333bf5a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-0-241,UID:172-232-0-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-0-241,},FirstTimestamp:2025-05-17 00:30:17.86043741 +0000 UTC m=+0.227558241,LastTimestamp:2025-05-17 00:30:17.86043741 +0000 UTC m=+0.227558241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-0-241,}" May 17 00:30:17.867061 kubelet[2165]: I0517 00:30:17.866710 2165 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:30:17.867061 kubelet[2165]: I0517 00:30:17.866787 2165 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:30:17.867061 kubelet[2165]: E0517 00:30:17.866845 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:17.867450 kubelet[2165]: I0517 00:30:17.867420 2165 server.go:449] "Adding debug handlers to kubelet server" May 17 00:30:17.868751 kubelet[2165]: I0517 00:30:17.868731 2165 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:30:17.868787 kubelet[2165]: I0517 00:30:17.868779 2165 reconciler.go:26] "Reconciler: start to sync state" May 17 00:30:17.871204 kubelet[2165]: I0517 00:30:17.871183 2165 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 May 17 00:30:17.871448 kubelet[2165]: I0517 00:30:17.871389 2165 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:30:17.872480 kubelet[2165]: I0517 00:30:17.871600 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:30:17.872480 kubelet[2165]: E0517 00:30:17.871687 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="200ms" May 17 00:30:17.872480 kubelet[2165]: I0517 00:30:17.871866 2165 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:30:17.873629 kubelet[2165]: I0517 00:30:17.873616 2165 factory.go:221] Registration of the containerd container factory successfully May 17 00:30:17.873687 kubelet[2165]: I0517 00:30:17.873679 2165 factory.go:221] Registration of the systemd container factory successfully May 17 00:30:17.877763 kubelet[2165]: W0517 00:30:17.877735 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:17.877839 kubelet[2165]: E0517 00:30:17.877820 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:17.878879 kubelet[2165]: I0517 00:30:17.878856 2165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:30:17.879764 kubelet[2165]: I0517 00:30:17.879746 2165 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:30:17.879764 kubelet[2165]: I0517 00:30:17.879764 2165 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:30:17.879865 kubelet[2165]: I0517 00:30:17.879778 2165 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:30:17.879865 kubelet[2165]: E0517 00:30:17.879807 2165 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:30:17.889090 kubelet[2165]: W0517 00:30:17.889059 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.0.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:17.889144 kubelet[2165]: E0517 00:30:17.889091 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.0.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:17.892259 kubelet[2165]: E0517 00:30:17.891190 2165 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:30:17.894877 kubelet[2165]: I0517 00:30:17.894866 2165 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:30:17.894955 kubelet[2165]: I0517 00:30:17.894946 2165 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:30:17.895012 kubelet[2165]: I0517 00:30:17.894992 2165 state_mem.go:36] "Initialized new in-memory state store" May 17 00:30:17.896282 kubelet[2165]: I0517 00:30:17.896272 2165 policy_none.go:49] "None policy: Start" May 17 00:30:17.896684 kubelet[2165]: I0517 00:30:17.896674 2165 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:30:17.896777 kubelet[2165]: I0517 00:30:17.896770 2165 state_mem.go:35] "Initializing new in-memory state store" May 17 00:30:17.901530 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:30:17.915771 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:30:17.918266 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:30:17.933318 kubelet[2165]: I0517 00:30:17.932986 2165 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:30:17.933318 kubelet[2165]: I0517 00:30:17.933119 2165 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:30:17.933318 kubelet[2165]: I0517 00:30:17.933127 2165 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:30:17.933318 kubelet[2165]: I0517 00:30:17.933265 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:30:17.934636 kubelet[2165]: E0517 00:30:17.934604 2165 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-0-241\" not found" May 17 00:30:17.987134 systemd[1]: Created slice kubepods-burstable-poda8b484856adcf367bf17984a3d3a47ac.slice - libcontainer container kubepods-burstable-poda8b484856adcf367bf17984a3d3a47ac.slice. 
May 17 00:30:18.004937 systemd[1]: Created slice kubepods-burstable-poda1172aebec334bf55bcc4ccb89f46ca5.slice - libcontainer container kubepods-burstable-poda1172aebec334bf55bcc4ccb89f46ca5.slice. May 17 00:30:18.008579 systemd[1]: Created slice kubepods-burstable-pod965479e5150ff10254950d9fc6e90e3c.slice - libcontainer container kubepods-burstable-pod965479e5150ff10254950d9fc6e90e3c.slice. May 17 00:30:18.034783 kubelet[2165]: I0517 00:30:18.034736 2165 kubelet_node_status.go:72] "Attempting to register node" node="172-232-0-241" May 17 00:30:18.035073 kubelet[2165]: E0517 00:30:18.035043 2165 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 17 00:30:18.072454 kubelet[2165]: E0517 00:30:18.072419 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="400ms" May 17 00:30:18.169455 kubelet[2165]: I0517 00:30:18.169345 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b484856adcf367bf17984a3d3a47ac-ca-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"a8b484856adcf367bf17984a3d3a47ac\") " pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:18.169455 kubelet[2165]: I0517 00:30:18.169418 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-kubeconfig\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:18.169534 kubelet[2165]: I0517 00:30:18.169458 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965479e5150ff10254950d9fc6e90e3c-kubeconfig\") pod \"kube-scheduler-172-232-0-241\" (UID: \"965479e5150ff10254950d9fc6e90e3c\") " pod="kube-system/kube-scheduler-172-232-0-241" May 17 00:30:18.169534 kubelet[2165]: I0517 00:30:18.169475 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b484856adcf367bf17984a3d3a47ac-k8s-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"a8b484856adcf367bf17984a3d3a47ac\") " pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:18.169534 kubelet[2165]: I0517 00:30:18.169501 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b484856adcf367bf17984a3d3a47ac-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-0-241\" (UID: \"a8b484856adcf367bf17984a3d3a47ac\") " pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:18.169534 kubelet[2165]: I0517 00:30:18.169516 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-ca-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 
00:30:18.169534 kubelet[2165]: I0517 00:30:18.169532 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-flexvolume-dir\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:18.169618 kubelet[2165]: I0517 00:30:18.169549 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-k8s-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:18.169618 kubelet[2165]: I0517 00:30:18.169562 2165 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:18.237388 kubelet[2165]: I0517 00:30:18.237342 2165 kubelet_node_status.go:72] "Attempting to register node" node="172-232-0-241" May 17 00:30:18.237699 kubelet[2165]: E0517 00:30:18.237665 2165 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 17 00:30:18.302436 kubelet[2165]: E0517 00:30:18.302403 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:18.303280 containerd[1476]: time="2025-05-17T00:30:18.303201560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-0-241,Uid:a8b484856adcf367bf17984a3d3a47ac,Namespace:kube-system,Attempt:0,}" May 17 00:30:18.306885 kubelet[2165]: E0517 00:30:18.306782 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:18.307660 containerd[1476]: time="2025-05-17T00:30:18.307604660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-0-241,Uid:a1172aebec334bf55bcc4ccb89f46ca5,Namespace:kube-system,Attempt:0,}" May 17 00:30:18.310826 kubelet[2165]: E0517 00:30:18.310783 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:18.311493 containerd[1476]: time="2025-05-17T00:30:18.311459470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-0-241,Uid:965479e5150ff10254950d9fc6e90e3c,Namespace:kube-system,Attempt:0,}" May 17 00:30:18.473047 kubelet[2165]: E0517 00:30:18.472934 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="800ms" May 17 00:30:18.639806 kubelet[2165]: I0517 00:30:18.639554 2165 
kubelet_node_status.go:72] "Attempting to register node" node="172-232-0-241" May 17 00:30:18.639899 kubelet[2165]: E0517 00:30:18.639869 2165 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 17 00:30:18.733896 kubelet[2165]: W0517 00:30:18.733795 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:18.733896 kubelet[2165]: E0517 00:30:18.733866 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:18.879147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078326816.mount: Deactivated successfully. May 17 00:30:18.881101 kubelet[2165]: W0517 00:30:18.881022 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.0.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:18.881101 kubelet[2165]: E0517 00:30:18.881077 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.0.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:18.881171 containerd[1476]: time="2025-05-17T00:30:18.881065950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:30:18.882005 containerd[1476]: time="2025-05-17T00:30:18.881977060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:30:18.882544 containerd[1476]: time="2025-05-17T00:30:18.882519180Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:30:18.884189 containerd[1476]: time="2025-05-17T00:30:18.884045740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:30:18.884189 containerd[1476]: time="2025-05-17T00:30:18.884099990Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:30:18.884900 containerd[1476]: time="2025-05-17T00:30:18.884876990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:30:18.886393 containerd[1476]: time="2025-05-17T00:30:18.885580610Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:30:18.886393 containerd[1476]: time="2025-05-17T00:30:18.886011890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 574.49794ms" May 17 00:30:18.886393 containerd[1476]: time="2025-05-17T00:30:18.886362940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:30:18.888256 containerd[1476]: time="2025-05-17T00:30:18.888230020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.93865ms" May 17 00:30:18.890392 containerd[1476]: time="2025-05-17T00:30:18.890335520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.62221ms" May 17 00:30:18.941260 kubelet[2165]: W0517 00:30:18.941127 2165 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 17 00:30:18.941260 kubelet[2165]: E0517 00:30:18.941178 2165 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.0.241:6443: connect: connection refused" logger="UnhandledError" May 17 00:30:18.969132 containerd[1476]: time="2025-05-17T00:30:18.968971840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:18.969132 containerd[1476]: time="2025-05-17T00:30:18.969019550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:18.969132 containerd[1476]: time="2025-05-17T00:30:18.969030590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:18.971415 containerd[1476]: time="2025-05-17T00:30:18.970518370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:18.971415 containerd[1476]: time="2025-05-17T00:30:18.971105330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:18.971415 containerd[1476]: time="2025-05-17T00:30:18.971114720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:18.971415 containerd[1476]: time="2025-05-17T00:30:18.971027750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:18.972690 containerd[1476]: time="2025-05-17T00:30:18.972541060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:18.974291 containerd[1476]: time="2025-05-17T00:30:18.974145120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:18.974291 containerd[1476]: time="2025-05-17T00:30:18.974187160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:18.974291 containerd[1476]: time="2025-05-17T00:30:18.974198230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:18.974291 containerd[1476]: time="2025-05-17T00:30:18.974247540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:18.999573 systemd[1]: Started cri-containerd-96599847f29948b6c76a446cc9ddb8ccc0f56416e28624bebca08a3402e0ebbf.scope - libcontainer container 96599847f29948b6c76a446cc9ddb8ccc0f56416e28624bebca08a3402e0ebbf. May 17 00:30:19.000971 systemd[1]: Started cri-containerd-9fbc400e6d459b6bb237cefb4830068825b112b968fde7b87f807a385f63aaf8.scope - libcontainer container 9fbc400e6d459b6bb237cefb4830068825b112b968fde7b87f807a385f63aaf8. May 17 00:30:19.004938 systemd[1]: Started cri-containerd-b9bd10cbc2a73330cd8684861a4effbd816af55087f012b91d98a2fdfe78b774.scope - libcontainer container b9bd10cbc2a73330cd8684861a4effbd816af55087f012b91d98a2fdfe78b774. 
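
While the sandbox containers start, it is worth unpacking the HardEvictionThresholds from the container-manager NodeConfig logged earlier: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of how such thresholds are evaluated; the observed values below are invented for illustration, since the kubelet takes the real ones from its stats providers.

    # Sketch: evaluating the HardEvictionThresholds from the NodeConfig
    # logged above. A quantity threshold is an absolute floor; a
    # percentage threshold is a fraction of capacity.
    MI = 1024 * 1024

    thresholds = {
        "memory.available":   ("quantity",   100 * MI),
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal, observed, capacity):
        kind, value = thresholds[signal]
        limit = value if kind == "quantity" else value * capacity
        return observed < limit

    # 80Mi free memory on a 4Gi node breaches the 100Mi floor;
    # 30% free nodefs does not breach the 10% floor.
    print(breached("memory.available", 80 * MI, 4096 * MI))               # True
    print(breached("nodefs.available", 30 * 1024 * MI, 100 * 1024 * MI))  # False
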
May 17 00:30:19.037209 containerd[1476]: time="2025-05-17T00:30:19.037121300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-0-241,Uid:a8b484856adcf367bf17984a3d3a47ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fbc400e6d459b6bb237cefb4830068825b112b968fde7b87f807a385f63aaf8\"" May 17 00:30:19.039216 kubelet[2165]: E0517 00:30:19.039028 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:19.041356 containerd[1476]: time="2025-05-17T00:30:19.041304780Z" level=info msg="CreateContainer within sandbox \"9fbc400e6d459b6bb237cefb4830068825b112b968fde7b87f807a385f63aaf8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:30:19.050770 containerd[1476]: time="2025-05-17T00:30:19.050068340Z" level=info msg="CreateContainer within sandbox \"9fbc400e6d459b6bb237cefb4830068825b112b968fde7b87f807a385f63aaf8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fe4a4ee78765cc75dc68d6c7bc351048d079e1c45f85295cee67491a6b1899d1\"" May 17 00:30:19.051655 containerd[1476]: time="2025-05-17T00:30:19.051350470Z" level=info msg="StartContainer for \"fe4a4ee78765cc75dc68d6c7bc351048d079e1c45f85295cee67491a6b1899d1\"" May 17 00:30:19.062752 containerd[1476]: time="2025-05-17T00:30:19.062695820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-0-241,Uid:a1172aebec334bf55bcc4ccb89f46ca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"96599847f29948b6c76a446cc9ddb8ccc0f56416e28624bebca08a3402e0ebbf\"" May 17 00:30:19.063598 kubelet[2165]: E0517 00:30:19.063520 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:19.064962 containerd[1476]: time="2025-05-17T00:30:19.064936290Z" level=info msg="CreateContainer within sandbox \"96599847f29948b6c76a446cc9ddb8ccc0f56416e28624bebca08a3402e0ebbf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:30:19.072936 containerd[1476]: time="2025-05-17T00:30:19.072907720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-0-241,Uid:965479e5150ff10254950d9fc6e90e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9bd10cbc2a73330cd8684861a4effbd816af55087f012b91d98a2fdfe78b774\"" May 17 00:30:19.073449 kubelet[2165]: E0517 00:30:19.073412 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:19.076182 containerd[1476]: time="2025-05-17T00:30:19.076104460Z" level=info msg="CreateContainer within sandbox \"b9bd10cbc2a73330cd8684861a4effbd816af55087f012b91d98a2fdfe78b774\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:30:19.082531 containerd[1476]: time="2025-05-17T00:30:19.082512170Z" level=info msg="CreateContainer within sandbox \"96599847f29948b6c76a446cc9ddb8ccc0f56416e28624bebca08a3402e0ebbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7bb84932bf16ccbf57ba280e00e0923905a85a186bc3a80e11210e223bff02e\"" May 17 00:30:19.083392 containerd[1476]: time="2025-05-17T00:30:19.083376410Z" level=info msg="StartContainer for 
\"e7bb84932bf16ccbf57ba280e00e0923905a85a186bc3a80e11210e223bff02e\"" May 17 00:30:19.086108 systemd[1]: Started cri-containerd-fe4a4ee78765cc75dc68d6c7bc351048d079e1c45f85295cee67491a6b1899d1.scope - libcontainer container fe4a4ee78765cc75dc68d6c7bc351048d079e1c45f85295cee67491a6b1899d1. May 17 00:30:19.086391 containerd[1476]: time="2025-05-17T00:30:19.086332780Z" level=info msg="CreateContainer within sandbox \"b9bd10cbc2a73330cd8684861a4effbd816af55087f012b91d98a2fdfe78b774\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c42a732639eec049f16315315f878919bbfe0d629f65d528323f34a20117d675\"" May 17 00:30:19.086574 containerd[1476]: time="2025-05-17T00:30:19.086559560Z" level=info msg="StartContainer for \"c42a732639eec049f16315315f878919bbfe0d629f65d528323f34a20117d675\"" May 17 00:30:19.117606 systemd[1]: Started cri-containerd-c42a732639eec049f16315315f878919bbfe0d629f65d528323f34a20117d675.scope - libcontainer container c42a732639eec049f16315315f878919bbfe0d629f65d528323f34a20117d675. May 17 00:30:19.128648 systemd[1]: Started cri-containerd-e7bb84932bf16ccbf57ba280e00e0923905a85a186bc3a80e11210e223bff02e.scope - libcontainer container e7bb84932bf16ccbf57ba280e00e0923905a85a186bc3a80e11210e223bff02e. May 17 00:30:19.137807 containerd[1476]: time="2025-05-17T00:30:19.137608130Z" level=info msg="StartContainer for \"fe4a4ee78765cc75dc68d6c7bc351048d079e1c45f85295cee67491a6b1899d1\" returns successfully" May 17 00:30:19.182526 containerd[1476]: time="2025-05-17T00:30:19.182238640Z" level=info msg="StartContainer for \"e7bb84932bf16ccbf57ba280e00e0923905a85a186bc3a80e11210e223bff02e\" returns successfully" May 17 00:30:19.193030 containerd[1476]: time="2025-05-17T00:30:19.192828480Z" level=info msg="StartContainer for \"c42a732639eec049f16315315f878919bbfe0d629f65d528323f34a20117d675\" returns successfully" May 17 00:30:19.442746 kubelet[2165]: I0517 00:30:19.442160 2165 kubelet_node_status.go:72] "Attempting to register node" node="172-232-0-241" May 17 00:30:19.898076 kubelet[2165]: E0517 00:30:19.897876 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:19.901791 kubelet[2165]: E0517 00:30:19.901777 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:19.909810 kubelet[2165]: E0517 00:30:19.909778 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:20.146155 kubelet[2165]: E0517 00:30:20.146100 2165 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-0-241\" not found" node="172-232-0-241" May 17 00:30:20.213084 kubelet[2165]: I0517 00:30:20.212878 2165 kubelet_node_status.go:75] "Successfully registered node" node="172-232-0-241" May 17 00:30:20.213084 kubelet[2165]: E0517 00:30:20.212907 2165 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-232-0-241\": node \"172-232-0-241\" not found" May 17 00:30:20.227724 kubelet[2165]: E0517 00:30:20.227689 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.328290 kubelet[2165]: 
E0517 00:30:20.327960 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.428909 kubelet[2165]: E0517 00:30:20.428870 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.529461 kubelet[2165]: E0517 00:30:20.529359 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.630103 kubelet[2165]: E0517 00:30:20.630033 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.730619 kubelet[2165]: E0517 00:30:20.730578 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.831621 kubelet[2165]: E0517 00:30:20.831546 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:20.912438 kubelet[2165]: E0517 00:30:20.912390 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:20.932217 kubelet[2165]: E0517 00:30:20.932138 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:21.033018 kubelet[2165]: E0517 00:30:21.032975 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:21.133893 kubelet[2165]: E0517 00:30:21.133837 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:21.234554 kubelet[2165]: E0517 00:30:21.234501 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:21.334927 kubelet[2165]: E0517 00:30:21.334862 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:21.435159 kubelet[2165]: E0517 00:30:21.435005 2165 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 17 00:30:21.862145 kubelet[2165]: I0517 00:30:21.861829 2165 apiserver.go:52] "Watching apiserver" May 17 00:30:21.869098 kubelet[2165]: I0517 00:30:21.869035 2165 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:30:22.072994 systemd[1]: Reloading requested from client PID 2438 ('systemctl') (unit session-7.scope)... May 17 00:30:22.073018 systemd[1]: Reloading... May 17 00:30:22.179566 zram_generator::config[2479]: No configuration found. May 17 00:30:22.280325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:30:22.356667 systemd[1]: Reloading finished in 283 ms. May 17 00:30:22.397081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:30:22.413730 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:30:22.413991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:30:22.419766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
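
Looking back at the lease-controller retries above, the logged intervals double (200ms at 00:30:17.871, 400ms at 00:30:18.072, 800ms at 00:30:18.472) while the API server at 172.232.0.241:6443 still refuses connections. A sketch of that doubling schedule; the 7s ceiling below is an assumption for illustration, since the cap is never reached in this log before the API server comes up.

    # Sketch: the doubling retry interval visible in the
    # "Failed to ensure lease exists, will retry" messages above.
    # The 7000ms cap is assumed, not observed in this log.
    def lease_retry_intervals(base_ms=200, cap_ms=7000, attempts=8):
        interval = base_ms
        for _ in range(attempts):
            yield interval
            interval = min(interval * 2, cap_ms)

    print(list(lease_retry_intervals()))
    # -> [200, 400, 800, 1600, 3200, 6400, 7000, 7000]
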
May 17 00:30:22.584632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:30:22.590556 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:30:22.647787 kubelet[2529]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:30:22.647787 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:30:22.647787 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:30:22.648052 kubelet[2529]: I0517 00:30:22.647874 2529 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:30:22.657456 kubelet[2529]: I0517 00:30:22.656578 2529 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:30:22.657456 kubelet[2529]: I0517 00:30:22.656599 2529 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:30:22.657456 kubelet[2529]: I0517 00:30:22.656810 2529 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:30:22.658073 kubelet[2529]: I0517 00:30:22.658059 2529 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:30:22.664124 kubelet[2529]: I0517 00:30:22.662529 2529 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:30:22.665586 kubelet[2529]: E0517 00:30:22.665567 2529 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:30:22.665646 kubelet[2529]: I0517 00:30:22.665635 2529 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:30:22.668494 kubelet[2529]: I0517 00:30:22.668483 2529 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:30:22.668636 kubelet[2529]: I0517 00:30:22.668626 2529 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:30:22.668823 kubelet[2529]: I0517 00:30:22.668802 2529 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:30:22.668990 kubelet[2529]: I0517 00:30:22.668857 2529 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-0-241","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:30:22.669092 kubelet[2529]: I0517 00:30:22.669083 2529 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:30:22.669136 kubelet[2529]: I0517 00:30:22.669128 2529 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:30:22.669190 kubelet[2529]: I0517 00:30:22.669183 2529 state_mem.go:36] "Initialized new in-memory state store" May 17 00:30:22.669313 kubelet[2529]: I0517 00:30:22.669304 2529 kubelet.go:408] "Attempting to sync node with API server" May 17 00:30:22.669359 kubelet[2529]: I0517 00:30:22.669352 2529 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:30:22.669422 kubelet[2529]: I0517 00:30:22.669415 2529 kubelet.go:314] "Adding apiserver pod source" May 17 00:30:22.669595 kubelet[2529]: I0517 00:30:22.669585 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:30:22.674364 kubelet[2529]: I0517 00:30:22.674346 2529 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:30:22.674820 kubelet[2529]: I0517 00:30:22.674803 2529 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:30:22.676044 kubelet[2529]: I0517 00:30:22.676026 2529 server.go:1274] "Started kubelet" May 17 00:30:22.677891 kubelet[2529]: I0517 00:30:22.677879 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:30:22.683120 kubelet[2529]: E0517 
00:30:22.683106 2529 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:30:22.683174 kubelet[2529]: I0517 00:30:22.679201 2529 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:30:22.683844 kubelet[2529]: I0517 00:30:22.683832 2529 server.go:449] "Adding debug handlers to kubelet server" May 17 00:30:22.684823 kubelet[2529]: I0517 00:30:22.679589 2529 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:30:22.685323 kubelet[2529]: I0517 00:30:22.685016 2529 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:30:22.686048 kubelet[2529]: I0517 00:30:22.678451 2529 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:30:22.686272 kubelet[2529]: I0517 00:30:22.686167 2529 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:30:22.687036 kubelet[2529]: I0517 00:30:22.686578 2529 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:30:22.687373 kubelet[2529]: I0517 00:30:22.687363 2529 reconciler.go:26] "Reconciler: start to sync state" May 17 00:30:22.689991 kubelet[2529]: I0517 00:30:22.689979 2529 factory.go:221] Registration of the containerd container factory successfully May 17 00:30:22.690119 kubelet[2529]: I0517 00:30:22.690110 2529 factory.go:221] Registration of the systemd container factory successfully May 17 00:30:22.690215 kubelet[2529]: I0517 00:30:22.690201 2529 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:30:22.690438 kubelet[2529]: I0517 00:30:22.690399 2529 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:30:22.691443 kubelet[2529]: I0517 00:30:22.691411 2529 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:30:22.691469 kubelet[2529]: I0517 00:30:22.691456 2529 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:30:22.691497 kubelet[2529]: I0517 00:30:22.691469 2529 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:30:22.691532 kubelet[2529]: E0517 00:30:22.691515 2529 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:30:22.738071 kubelet[2529]: I0517 00:30:22.738042 2529 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:30:22.738071 kubelet[2529]: I0517 00:30:22.738059 2529 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:30:22.738071 kubelet[2529]: I0517 00:30:22.738073 2529 state_mem.go:36] "Initialized new in-memory state store" May 17 00:30:22.738234 kubelet[2529]: I0517 00:30:22.738170 2529 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:30:22.738234 kubelet[2529]: I0517 00:30:22.738178 2529 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:30:22.738234 kubelet[2529]: I0517 00:30:22.738194 2529 policy_none.go:49] "None policy: Start" May 17 00:30:22.738638 kubelet[2529]: I0517 00:30:22.738618 2529 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:30:22.738638 kubelet[2529]: I0517 00:30:22.738635 2529 state_mem.go:35] "Initializing new in-memory state store" May 17 00:30:22.738727 kubelet[2529]: I0517 00:30:22.738717 2529 state_mem.go:75] "Updated machine memory state" May 17 00:30:22.741961 kubelet[2529]: I0517 00:30:22.741948 2529 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:30:22.742080 kubelet[2529]: I0517 00:30:22.742070 2529 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:30:22.742112 kubelet[2529]: I0517 00:30:22.742081 2529 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:30:22.742516 kubelet[2529]: I0517 00:30:22.742477 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:30:22.848232 kubelet[2529]: I0517 00:30:22.848137 2529 kubelet_node_status.go:72] "Attempting to register node" node="172-232-0-241" May 17 00:30:22.854118 kubelet[2529]: I0517 00:30:22.853955 2529 kubelet_node_status.go:111] "Node was previously registered" node="172-232-0-241" May 17 00:30:22.854118 kubelet[2529]: I0517 00:30:22.854004 2529 kubelet_node_status.go:75] "Successfully registered node" node="172-232-0-241" May 17 00:30:22.888606 kubelet[2529]: I0517 00:30:22.888569 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b484856adcf367bf17984a3d3a47ac-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-0-241\" (UID: \"a8b484856adcf367bf17984a3d3a47ac\") " pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:22.888606 kubelet[2529]: I0517 00:30:22.888600 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-kubeconfig\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:22.888821 kubelet[2529]: I0517 00:30:22.888619 2529 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:22.888821 kubelet[2529]: I0517 00:30:22.888641 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b484856adcf367bf17984a3d3a47ac-ca-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"a8b484856adcf367bf17984a3d3a47ac\") " pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:22.888821 kubelet[2529]: I0517 00:30:22.888693 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b484856adcf367bf17984a3d3a47ac-k8s-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"a8b484856adcf367bf17984a3d3a47ac\") " pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:22.888821 kubelet[2529]: I0517 00:30:22.888726 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-ca-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:22.888821 kubelet[2529]: I0517 00:30:22.888749 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-flexvolume-dir\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:22.889036 kubelet[2529]: I0517 00:30:22.888768 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1172aebec334bf55bcc4ccb89f46ca5-k8s-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"a1172aebec334bf55bcc4ccb89f46ca5\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 17 00:30:22.889036 kubelet[2529]: I0517 00:30:22.888790 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965479e5150ff10254950d9fc6e90e3c-kubeconfig\") pod \"kube-scheduler-172-232-0-241\" (UID: \"965479e5150ff10254950d9fc6e90e3c\") " pod="kube-system/kube-scheduler-172-232-0-241" May 17 00:30:23.099347 kubelet[2529]: E0517 00:30:23.098963 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:23.099637 kubelet[2529]: E0517 00:30:23.099610 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:23.100175 kubelet[2529]: E0517 00:30:23.099721 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:23.679791 kubelet[2529]: I0517 
00:30:23.678495 2529 apiserver.go:52] "Watching apiserver" May 17 00:30:23.688473 kubelet[2529]: I0517 00:30:23.688332 2529 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:30:23.728601 kubelet[2529]: E0517 00:30:23.727502 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:23.728963 kubelet[2529]: E0517 00:30:23.728920 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:23.736744 kubelet[2529]: E0517 00:30:23.736716 2529 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-232-0-241\" already exists" pod="kube-system/kube-apiserver-172-232-0-241" May 17 00:30:23.737852 kubelet[2529]: E0517 00:30:23.737835 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:23.760814 kubelet[2529]: I0517 00:30:23.760762 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-0-241" podStartSLOduration=1.76074623 podStartE2EDuration="1.76074623s" podCreationTimestamp="2025-05-17 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:23.75357647 +0000 UTC m=+1.154066271" watchObservedRunningTime="2025-05-17 00:30:23.76074623 +0000 UTC m=+1.161236031" May 17 00:30:23.767893 kubelet[2529]: I0517 00:30:23.767698 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-0-241" podStartSLOduration=1.76768124 podStartE2EDuration="1.76768124s" podCreationTimestamp="2025-05-17 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:23.76117021 +0000 UTC m=+1.161660021" watchObservedRunningTime="2025-05-17 00:30:23.76768124 +0000 UTC m=+1.168171051" May 17 00:30:23.767893 kubelet[2529]: I0517 00:30:23.767803 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-0-241" podStartSLOduration=1.76779725 podStartE2EDuration="1.76779725s" podCreationTimestamp="2025-05-17 00:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:23.76763716 +0000 UTC m=+1.168126961" watchObservedRunningTime="2025-05-17 00:30:23.76779725 +0000 UTC m=+1.168287061" May 17 00:30:24.644520 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
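The pod_startup_latency_tracker entries just above can be reconciled by hand: for these static control-plane pods the pull timestamps are Go zero values (0001-01-01) because no image was pulled, so podStartSLOduration equals podStartE2EDuration, which in turn matches the watch-observed running time minus the pod creation timestamp. A quick check in Go, assuming the timestamps use time.Time's default string format (which is what they appear to be here):

    package main

    import (
        "fmt"
        "time"
    )

    // For static pods with no image pull, podStartSLOduration ==
    // podStartE2EDuration == watchObservedRunningTime - creationTimestamp.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-05-17 00:30:22 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-05-17 00:30:23.76074623 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // 1.76074623s, the logged podStartSLOduration for kube-scheduler
    }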
May 17 00:30:24.730466 kubelet[2529]: E0517 00:30:24.730405 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:27.283518 kubelet[2529]: E0517 00:30:27.283476 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:27.704048 kubelet[2529]: I0517 00:30:27.704013 2529 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:30:27.704593 containerd[1476]: time="2025-05-17T00:30:27.704559223Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:30:27.704977 kubelet[2529]: I0517 00:30:27.704736 2529 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:30:27.833781 kubelet[2529]: E0517 00:30:27.833748 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:28.766873 systemd[1]: Created slice kubepods-besteffort-podd2e5f693_6b01_49bb_9a5f_52d7fbc9dd30.slice - libcontainer container kubepods-besteffort-podd2e5f693_6b01_49bb_9a5f_52d7fbc9dd30.slice. May 17 00:30:28.825916 kubelet[2529]: I0517 00:30:28.825881 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30-kube-proxy\") pod \"kube-proxy-g76qm\" (UID: \"d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30\") " pod="kube-system/kube-proxy-g76qm" May 17 00:30:28.825916 kubelet[2529]: I0517 00:30:28.825914 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30-xtables-lock\") pod \"kube-proxy-g76qm\" (UID: \"d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30\") " pod="kube-system/kube-proxy-g76qm" May 17 00:30:28.826593 kubelet[2529]: I0517 00:30:28.825941 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30-lib-modules\") pod \"kube-proxy-g76qm\" (UID: \"d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30\") " pod="kube-system/kube-proxy-g76qm" May 17 00:30:28.826593 kubelet[2529]: I0517 00:30:28.825955 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqkd4\" (UniqueName: \"kubernetes.io/projected/d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30-kube-api-access-vqkd4\") pod \"kube-proxy-g76qm\" (UID: \"d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30\") " pod="kube-system/kube-proxy-g76qm" May 17 00:30:28.883100 systemd[1]: Created slice kubepods-besteffort-podbde661f5_f0da_4c43_9dfb_780332faeffd.slice - libcontainer container kubepods-besteffort-podbde661f5_f0da_4c43_9dfb_780332faeffd.slice. 
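The kuberuntime_manager line above records the kubelet handing the node's pod CIDR (192.168.0.0/24) to the container runtime over CRI, after which containerd waits for a CNI plugin to drop a network config; that is why pod networking stays uninitialized until Calico is installed further down. A small sketch of the sanity check such a hand-off implies, standard library only:

    package main

    import (
        "fmt"
        "net"
    )

    // A pod CIDR has to parse as a network before it can be pushed to the
    // runtime; a /24 leaves 256 pod addresses on this node.
    func main() {
        ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Printf("pod CIDR %s: base %s, %d addresses\n", ipnet, ip, 1<<uint(bits-ones))
    }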
May 17 00:30:28.926841 kubelet[2529]: I0517 00:30:28.926806 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bde661f5-f0da-4c43-9dfb-780332faeffd-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-4z654\" (UID: \"bde661f5-f0da-4c43-9dfb-780332faeffd\") " pod="tigera-operator/tigera-operator-7c5755cdcb-4z654" May 17 00:30:28.927516 kubelet[2529]: I0517 00:30:28.926859 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f89df\" (UniqueName: \"kubernetes.io/projected/bde661f5-f0da-4c43-9dfb-780332faeffd-kube-api-access-f89df\") pod \"tigera-operator-7c5755cdcb-4z654\" (UID: \"bde661f5-f0da-4c43-9dfb-780332faeffd\") " pod="tigera-operator/tigera-operator-7c5755cdcb-4z654" May 17 00:30:29.074899 kubelet[2529]: E0517 00:30:29.074771 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:29.076242 containerd[1476]: time="2025-05-17T00:30:29.076044030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g76qm,Uid:d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30,Namespace:kube-system,Attempt:0,}" May 17 00:30:29.105339 containerd[1476]: time="2025-05-17T00:30:29.105191413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:29.105339 containerd[1476]: time="2025-05-17T00:30:29.105283775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:29.105339 containerd[1476]: time="2025-05-17T00:30:29.105295485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:29.105797 containerd[1476]: time="2025-05-17T00:30:29.105715855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:29.129570 systemd[1]: Started cri-containerd-a5f44ede8395f076b1ad0758d862a9c394238bc32289db6a9d259b482f8b0eda.scope - libcontainer container a5f44ede8395f076b1ad0758d862a9c394238bc32289db6a9d259b482f8b0eda. 
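The kube-proxy pod above then goes through the standard CRI sequence visible in the next lines: RunPodSandbox returns a sandbox ID (a5f44ede…), CreateContainer is issued within that sandbox and returns a container ID (ffc9e47b…), and StartContainer runs it. A sketch of the same three calls, assuming the k8s.io/cri-api v1 client and the default containerd socket; the pod UID and image reference are placeholders, and real configs carry much more (namespaces, mounts, security context):

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // The CRI flow the log shows for kube-proxy: create a sandbox, create
    // a container inside it, then start it.
    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name: "kube-proxy-g76qm", Namespace: "kube-system", Uid: "placeholder-uid",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.8"}, // placeholder tag
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: ctr.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
        log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
    }

The "returns sandbox id", "returns container id" and "StartContainer ... returns successfully" messages below are containerd's side of exactly these calls.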
May 17 00:30:29.149649 containerd[1476]: time="2025-05-17T00:30:29.149607913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g76qm,Uid:d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5f44ede8395f076b1ad0758d862a9c394238bc32289db6a9d259b482f8b0eda\"" May 17 00:30:29.150083 kubelet[2529]: E0517 00:30:29.150045 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:29.152934 containerd[1476]: time="2025-05-17T00:30:29.152903575Z" level=info msg="CreateContainer within sandbox \"a5f44ede8395f076b1ad0758d862a9c394238bc32289db6a9d259b482f8b0eda\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:30:29.162859 containerd[1476]: time="2025-05-17T00:30:29.162827754Z" level=info msg="CreateContainer within sandbox \"a5f44ede8395f076b1ad0758d862a9c394238bc32289db6a9d259b482f8b0eda\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47\"" May 17 00:30:29.164739 containerd[1476]: time="2025-05-17T00:30:29.163245713Z" level=info msg="StartContainer for \"ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47\"" May 17 00:30:29.186628 systemd[1]: Started cri-containerd-ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47.scope - libcontainer container ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47. May 17 00:30:29.187682 containerd[1476]: time="2025-05-17T00:30:29.187659032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-4z654,Uid:bde661f5-f0da-4c43-9dfb-780332faeffd,Namespace:tigera-operator,Attempt:0,}" May 17 00:30:29.214475 containerd[1476]: time="2025-05-17T00:30:29.212962840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:29.214475 containerd[1476]: time="2025-05-17T00:30:29.212999811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:29.214475 containerd[1476]: time="2025-05-17T00:30:29.213008051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:29.214475 containerd[1476]: time="2025-05-17T00:30:29.213059822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:29.221546 containerd[1476]: time="2025-05-17T00:30:29.221496168Z" level=info msg="StartContainer for \"ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47\" returns successfully" May 17 00:30:29.233663 systemd[1]: Started cri-containerd-a8efa341dc1a03c6c0be27b1bda5a62bfec8456c6782fa1ca92394284015550c.scope - libcontainer container a8efa341dc1a03c6c0be27b1bda5a62bfec8456c6782fa1ca92394284015550c. 
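The systemd unit names in these lines are mechanical transforms of Kubernetes identifiers: the pod's besteffort slice embeds its UID with dashes mapped to underscores, and every CRI container becomes a transient cri-containerd-<id>.scope inside it. A sketch of the naming, derived only from what the log itself shows:

    package main

    import (
        "fmt"
        "strings"
    )

    // Reproduces the unit names visible in the log: the kube-proxy pod's
    // slice and the scope for its kube-proxy container.
    func podSlice(qosClass, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
    }

    func containerScope(containerID string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", containerID)
    }

    func main() {
        fmt.Println(podSlice("besteffort", "d2e5f693-6b01-49bb-9a5f-52d7fbc9dd30"))
        // kubepods-besteffort-podd2e5f693_6b01_49bb_9a5f_52d7fbc9dd30.slice
        fmt.Println(containerScope("ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47"))
        // cri-containerd-ffc9e47b80cc928798be0923d93bf218c2e95b4d3342a675a0c945541b545f47.scope
    }

This layout follows from the systemd cgroup driver selected in the nodeConfig dump earlier ("CgroupDriver":"systemd"), which delegates cgroup creation to transient slices and scopes rather than raw cgroupfs paths.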
May 17 00:30:29.276136 containerd[1476]: time="2025-05-17T00:30:29.276093962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-4z654,Uid:bde661f5-f0da-4c43-9dfb-780332faeffd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a8efa341dc1a03c6c0be27b1bda5a62bfec8456c6782fa1ca92394284015550c\"" May 17 00:30:29.278469 containerd[1476]: time="2025-05-17T00:30:29.278415973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:30:29.743231 kubelet[2529]: E0517 00:30:29.742963 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:29.751393 kubelet[2529]: I0517 00:30:29.751022 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g76qm" podStartSLOduration=1.751009876 podStartE2EDuration="1.751009876s" podCreationTimestamp="2025-05-17 00:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:29.750856463 +0000 UTC m=+7.151346264" watchObservedRunningTime="2025-05-17 00:30:29.751009876 +0000 UTC m=+7.151499677" May 17 00:30:29.754422 kubelet[2529]: E0517 00:30:29.754404 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:30.486413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294233609.mount: Deactivated successfully. May 17 00:30:30.744395 kubelet[2529]: E0517 00:30:30.744182 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:31.125754 containerd[1476]: time="2025-05-17T00:30:31.125692864Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:31.126619 containerd[1476]: time="2025-05-17T00:30:31.126569721Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:30:31.127278 containerd[1476]: time="2025-05-17T00:30:31.127235104Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:31.130237 containerd[1476]: time="2025-05-17T00:30:31.129039959Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:31.130237 containerd[1476]: time="2025-05-17T00:30:31.129773023Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.851297919s" May 17 00:30:31.130237 containerd[1476]: time="2025-05-17T00:30:31.129852525Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:30:31.133026 
containerd[1476]: time="2025-05-17T00:30:31.132999956Z" level=info msg="CreateContainer within sandbox \"a8efa341dc1a03c6c0be27b1bda5a62bfec8456c6782fa1ca92394284015550c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:30:31.156278 containerd[1476]: time="2025-05-17T00:30:31.156252696Z" level=info msg="CreateContainer within sandbox \"a8efa341dc1a03c6c0be27b1bda5a62bfec8456c6782fa1ca92394284015550c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a9e0c16dfb94fb1362936ad2250837f1f8cb87433c34c3d5dedbcacbee99e07a\"" May 17 00:30:31.156831 containerd[1476]: time="2025-05-17T00:30:31.156809657Z" level=info msg="StartContainer for \"a9e0c16dfb94fb1362936ad2250837f1f8cb87433c34c3d5dedbcacbee99e07a\"" May 17 00:30:31.191525 systemd[1]: Started cri-containerd-a9e0c16dfb94fb1362936ad2250837f1f8cb87433c34c3d5dedbcacbee99e07a.scope - libcontainer container a9e0c16dfb94fb1362936ad2250837f1f8cb87433c34c3d5dedbcacbee99e07a. May 17 00:30:31.214266 containerd[1476]: time="2025-05-17T00:30:31.214231660Z" level=info msg="StartContainer for \"a9e0c16dfb94fb1362936ad2250837f1f8cb87433c34c3d5dedbcacbee99e07a\" returns successfully" May 17 00:30:31.756934 kubelet[2529]: I0517 00:30:31.756839 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-4z654" podStartSLOduration=1.903308794 podStartE2EDuration="3.756824708s" podCreationTimestamp="2025-05-17 00:30:28 +0000 UTC" firstStartedPulling="2025-05-17 00:30:29.277306399 +0000 UTC m=+6.677796190" lastFinishedPulling="2025-05-17 00:30:31.130822303 +0000 UTC m=+8.531312104" observedRunningTime="2025-05-17 00:30:31.755643615 +0000 UTC m=+9.156133416" watchObservedRunningTime="2025-05-17 00:30:31.756824708 +0000 UTC m=+9.157314499" May 17 00:30:36.368022 sudo[1687]: pam_unix(sudo:session): session closed for user root May 17 00:30:36.426488 sshd[1684]: pam_unix(sshd:session): session closed for user core May 17 00:30:36.431946 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. May 17 00:30:36.432984 systemd[1]: sshd@6-172.232.0.241:22-139.178.89.65:37640.service: Deactivated successfully. May 17 00:30:36.436121 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:30:36.437571 systemd[1]: session-7.scope: Consumed 3.520s CPU time, 155.7M memory peak, 0B memory swap peak. May 17 00:30:36.441014 systemd-logind[1455]: Removed session 7. May 17 00:30:37.288640 kubelet[2529]: E0517 00:30:37.288238 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:37.837624 kubelet[2529]: E0517 00:30:37.837565 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:39.095220 update_engine[1458]: I20250517 00:30:39.095161 1458 update_attempter.cc:509] Updating boot flags... May 17 00:30:39.138914 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2931) May 17 00:30:39.194510 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2935) May 17 00:30:39.263194 systemd[1]: Created slice kubepods-besteffort-pod36f314af_d57b_4d7a_90de_e0fe9fafde81.slice - libcontainer container kubepods-besteffort-pod36f314af_d57b_4d7a_90de_e0fe9fafde81.slice. 
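Unlike the static pods earlier, tigera-operator has real pull timestamps, and the numbers above reconcile: the pull window (lastFinishedPulling minus firstStartedPulling) is about 1.8535s, in line with containerd's reported 1.851297919s, and podStartSLOduration is the end-to-end duration with that pull window subtracted. Checking the arithmetic in Go, using the timestamps as logged (the tiny residual comes from the tracker measuring against slightly different internal clocks):

    package main

    import (
        "fmt"
        "time"
    )

    // podStartE2EDuration = watchObservedRunningTime - creationTimestamp;
    // podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-05-17 00:30:28 +0000 UTC")
        pullStart := parse("2025-05-17 00:30:29.277306399 +0000 UTC")
        pullEnd := parse("2025-05-17 00:30:31.130822303 +0000 UTC")
        observed := parse("2025-05-17 00:30:31.756824708 +0000 UTC")

        e2e := observed.Sub(created)
        pull := pullEnd.Sub(pullStart)
        fmt.Println("E2E: ", e2e)      // 3.756824708s
        fmt.Println("pull:", pull)     // 1.853515904s
        fmt.Println("SLO: ", e2e-pull) // 1.903308804s, ~ the logged 1.903308794s
    }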
May 17 00:30:39.290487 kubelet[2529]: I0517 00:30:39.290381 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36f314af-d57b-4d7a-90de-e0fe9fafde81-tigera-ca-bundle\") pod \"calico-typha-64f847d955-rq9qk\" (UID: \"36f314af-d57b-4d7a-90de-e0fe9fafde81\") " pod="calico-system/calico-typha-64f847d955-rq9qk" May 17 00:30:39.290487 kubelet[2529]: I0517 00:30:39.290414 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccpn\" (UniqueName: \"kubernetes.io/projected/36f314af-d57b-4d7a-90de-e0fe9fafde81-kube-api-access-bccpn\") pod \"calico-typha-64f847d955-rq9qk\" (UID: \"36f314af-d57b-4d7a-90de-e0fe9fafde81\") " pod="calico-system/calico-typha-64f847d955-rq9qk" May 17 00:30:39.290487 kubelet[2529]: I0517 00:30:39.290446 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36f314af-d57b-4d7a-90de-e0fe9fafde81-typha-certs\") pod \"calico-typha-64f847d955-rq9qk\" (UID: \"36f314af-d57b-4d7a-90de-e0fe9fafde81\") " pod="calico-system/calico-typha-64f847d955-rq9qk" May 17 00:30:39.556327 systemd[1]: Created slice kubepods-besteffort-pode5185284_76b5_43ed_b127_2d0fa638e96d.slice - libcontainer container kubepods-besteffort-pode5185284_76b5_43ed_b127_2d0fa638e96d.slice. May 17 00:30:39.572065 kubelet[2529]: E0517 00:30:39.572031 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:39.572676 containerd[1476]: time="2025-05-17T00:30:39.572638745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64f847d955-rq9qk,Uid:36f314af-d57b-4d7a-90de-e0fe9fafde81,Namespace:calico-system,Attempt:0,}" May 17 00:30:39.592883 kubelet[2529]: I0517 00:30:39.592683 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-lib-modules\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.592883 kubelet[2529]: I0517 00:30:39.592712 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-var-lib-calico\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.592883 kubelet[2529]: I0517 00:30:39.592726 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghx6m\" (UniqueName: \"kubernetes.io/projected/e5185284-76b5-43ed-b127-2d0fa638e96d-kube-api-access-ghx6m\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.592883 kubelet[2529]: I0517 00:30:39.592740 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-cni-log-dir\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.592883 kubelet[2529]: I0517 
00:30:39.592752 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-xtables-lock\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593119 kubelet[2529]: I0517 00:30:39.592764 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-cni-net-dir\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593119 kubelet[2529]: I0517 00:30:39.592775 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-var-run-calico\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593119 kubelet[2529]: I0517 00:30:39.592788 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e5185284-76b5-43ed-b127-2d0fa638e96d-node-certs\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593119 kubelet[2529]: I0517 00:30:39.592801 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5185284-76b5-43ed-b127-2d0fa638e96d-tigera-ca-bundle\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593119 kubelet[2529]: I0517 00:30:39.592813 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-flexvol-driver-host\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593221 kubelet[2529]: I0517 00:30:39.592837 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-cni-bin-dir\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.593221 kubelet[2529]: I0517 00:30:39.592849 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e5185284-76b5-43ed-b127-2d0fa638e96d-policysync\") pod \"calico-node-v5gn8\" (UID: \"e5185284-76b5-43ed-b127-2d0fa638e96d\") " pod="calico-system/calico-node-v5gn8" May 17 00:30:39.595116 containerd[1476]: time="2025-05-17T00:30:39.594696730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:39.595116 containerd[1476]: time="2025-05-17T00:30:39.594749121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:39.595116 containerd[1476]: time="2025-05-17T00:30:39.594782001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:39.595116 containerd[1476]: time="2025-05-17T00:30:39.594874822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:39.619564 systemd[1]: Started cri-containerd-ddfceffae3d464e23c5a497e99b7724b602a1fb1950589b664bf76309822ce49.scope - libcontainer container ddfceffae3d464e23c5a497e99b7724b602a1fb1950589b664bf76309822ce49. May 17 00:30:39.656315 containerd[1476]: time="2025-05-17T00:30:39.655033628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64f847d955-rq9qk,Uid:36f314af-d57b-4d7a-90de-e0fe9fafde81,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddfceffae3d464e23c5a497e99b7724b602a1fb1950589b664bf76309822ce49\"" May 17 00:30:39.656937 kubelet[2529]: E0517 00:30:39.656882 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:39.657854 containerd[1476]: time="2025-05-17T00:30:39.657759310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:30:39.694665 kubelet[2529]: E0517 00:30:39.694453 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.694665 kubelet[2529]: W0517 00:30:39.694473 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.694665 kubelet[2529]: E0517 00:30:39.694630 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.696491 kubelet[2529]: E0517 00:30:39.695021 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.696491 kubelet[2529]: W0517 00:30:39.695030 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.696491 kubelet[2529]: E0517 00:30:39.696334 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.696834 kubelet[2529]: E0517 00:30:39.696808 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.696834 kubelet[2529]: W0517 00:30:39.696823 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.696834 kubelet[2529]: E0517 00:30:39.696834 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:39.697144 kubelet[2529]: E0517 00:30:39.697112 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.697195 kubelet[2529]: W0517 00:30:39.697186 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.697323 kubelet[2529]: E0517 00:30:39.697254 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.697532 kubelet[2529]: E0517 00:30:39.697522 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.697669 kubelet[2529]: W0517 00:30:39.697589 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.697669 kubelet[2529]: E0517 00:30:39.697602 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.698603 kubelet[2529]: E0517 00:30:39.698594 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.698662 kubelet[2529]: W0517 00:30:39.698652 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.699063 kubelet[2529]: E0517 00:30:39.699051 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.699249 kubelet[2529]: E0517 00:30:39.699233 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.699249 kubelet[2529]: W0517 00:30:39.699248 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.699329 kubelet[2529]: E0517 00:30:39.699273 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.699551 kubelet[2529]: E0517 00:30:39.699537 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.699551 kubelet[2529]: W0517 00:30:39.699548 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.699642 kubelet[2529]: E0517 00:30:39.699617 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:39.699719 kubelet[2529]: E0517 00:30:39.699701 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.699719 kubelet[2529]: W0517 00:30:39.699711 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.699804 kubelet[2529]: E0517 00:30:39.699790 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.699924 kubelet[2529]: E0517 00:30:39.699908 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.699924 kubelet[2529]: W0517 00:30:39.699919 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.700024 kubelet[2529]: E0517 00:30:39.699988 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.700081 kubelet[2529]: E0517 00:30:39.700068 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.700081 kubelet[2529]: W0517 00:30:39.700078 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.700134 kubelet[2529]: E0517 00:30:39.700098 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.700279 kubelet[2529]: E0517 00:30:39.700267 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.700279 kubelet[2529]: W0517 00:30:39.700277 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.700325 kubelet[2529]: E0517 00:30:39.700295 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.700567 kubelet[2529]: E0517 00:30:39.700553 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.700567 kubelet[2529]: W0517 00:30:39.700565 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.700632 kubelet[2529]: E0517 00:30:39.700577 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:39.700811 kubelet[2529]: E0517 00:30:39.700798 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.700811 kubelet[2529]: W0517 00:30:39.700808 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.700898 kubelet[2529]: E0517 00:30:39.700885 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.701027 kubelet[2529]: E0517 00:30:39.701009 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.701027 kubelet[2529]: W0517 00:30:39.701018 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.701106 kubelet[2529]: E0517 00:30:39.701093 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.701229 kubelet[2529]: E0517 00:30:39.701217 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.701229 kubelet[2529]: W0517 00:30:39.701227 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.701313 kubelet[2529]: E0517 00:30:39.701291 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.701388 kubelet[2529]: E0517 00:30:39.701376 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.701388 kubelet[2529]: W0517 00:30:39.701386 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.701487 kubelet[2529]: E0517 00:30:39.701470 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.701668 kubelet[2529]: E0517 00:30:39.701655 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.701668 kubelet[2529]: W0517 00:30:39.701665 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.701740 kubelet[2529]: E0517 00:30:39.701684 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:30:39.701898 kubelet[2529]: E0517 00:30:39.701884 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.701898 kubelet[2529]: W0517 00:30:39.701895 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.701974 kubelet[2529]: E0517 00:30:39.701907 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.702104 kubelet[2529]: E0517 00:30:39.702079 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.702104 kubelet[2529]: W0517 00:30:39.702090 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.702104 kubelet[2529]: E0517 00:30:39.702102 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.702994 kubelet[2529]: E0517 00:30:39.702840 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.702994 kubelet[2529]: W0517 00:30:39.702851 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.702994 kubelet[2529]: E0517 00:30:39.702865 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.703162 kubelet[2529]: E0517 00:30:39.703129 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.703162 kubelet[2529]: W0517 00:30:39.703138 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.703267 kubelet[2529]: E0517 00:30:39.703215 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:30:39.703375 kubelet[2529]: E0517 00:30:39.703362 2529 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:30:39.703375 kubelet[2529]: W0517 00:30:39.703373 2529 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:30:39.703444 kubelet[2529]: E0517 00:30:39.703381 2529 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 17 00:30:39.778867 kubelet[2529]: E0517 00:30:39.778653 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9kj7" podUID="0996e84d-dd0b-49e3-addd-0931e48a258e"
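The "cni plugin not initialized" condition persists until a loadable network config appears in /etc/cni/net.d, which the install-cni container only writes at 00:30:43 below. A stdlib-only Go sketch of the check a runtime effectively performs (simplified; the real logic lives in containerd/libcni):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether dir holds any file a CNI-aware runtime would
    // try to load (.conf, .conflist, .json). Until one exists, the node stays
    // NetworkReady=false as logged above.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/cni/net.d")
        fmt.Printf("loadable CNI config present: %v (err: %v)\n", ok, err)
    }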
May 17 00:30:39.793646 kubelet[2529]: I0517 00:30:39.793581 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0996e84d-dd0b-49e3-addd-0931e48a258e-kubelet-dir\") pod \"csi-node-driver-h9kj7\" (UID: \"0996e84d-dd0b-49e3-addd-0931e48a258e\") " pod="calico-system/csi-node-driver-h9kj7"
May 17 00:30:39.794008 kubelet[2529]: I0517 00:30:39.793947 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0996e84d-dd0b-49e3-addd-0931e48a258e-socket-dir\") pod \"csi-node-driver-h9kj7\" (UID: \"0996e84d-dd0b-49e3-addd-0931e48a258e\") " pod="calico-system/csi-node-driver-h9kj7"
May 17 00:30:39.794336 kubelet[2529]: I0517 00:30:39.794290 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4mj\" (UniqueName: \"kubernetes.io/projected/0996e84d-dd0b-49e3-addd-0931e48a258e-kube-api-access-7j4mj\") pod \"csi-node-driver-h9kj7\" (UID: \"0996e84d-dd0b-49e3-addd-0931e48a258e\") " pod="calico-system/csi-node-driver-h9kj7"
May 17 00:30:39.794720 kubelet[2529]: I0517 00:30:39.794679 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0996e84d-dd0b-49e3-addd-0931e48a258e-varrun\") pod \"csi-node-driver-h9kj7\" (UID: \"0996e84d-dd0b-49e3-addd-0931e48a258e\") " pod="calico-system/csi-node-driver-h9kj7"
May 17 00:30:39.795261 kubelet[2529]: I0517 00:30:39.795216 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0996e84d-dd0b-49e3-addd-0931e48a258e-registration-dir\") pod \"csi-node-driver-h9kj7\" (UID: \"0996e84d-dd0b-49e3-addd-0931e48a258e\") " pod="calico-system/csi-node-driver-h9kj7"
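The I-level reconciler entries above record the kubelet attaching the csi-node-driver pod's five volumes (kubelet-dir, socket-dir, kube-api-access-7j4mj, varrun, registration-dir). Each UniqueName follows the pattern visible in the log, <volume plugin>/<pod UID>-<volume name>; a tiny illustrative sketch of that composition, not kubelet source:

    package main

    import "fmt"

    // uniqueVolumeName reproduces the naming pattern seen in the reconciler
    // entries: "<plugin>/<pod UID>-<volume name>".
    func uniqueVolumeName(plugin, podUID, volume string) string {
        return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
    }

    func main() {
        // Prints the UniqueName logged for the kubelet-dir volume above.
        fmt.Println(uniqueVolumeName("kubernetes.io/host-path",
            "0996e84d-dd0b-49e3-addd-0931e48a258e", "kubelet-dir"))
    }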
May 17 00:30:39.861284 containerd[1476]: time="2025-05-17T00:30:39.861146092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5gn8,Uid:e5185284-76b5-43ed-b127-2d0fa638e96d,Namespace:calico-system,Attempt:0,}"
May 17 00:30:39.882293 containerd[1476]: time="2025-05-17T00:30:39.882119784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:30:39.882293 containerd[1476]: time="2025-05-17T00:30:39.882168605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:30:39.882293 containerd[1476]: time="2025-05-17T00:30:39.882181345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:30:39.882293 containerd[1476]: time="2025-05-17T00:30:39.882251486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:30:39.896545 systemd[1]: Started cri-containerd-797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b.scope - libcontainer container 797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b.
May 17 00:30:39.924812 containerd[1476]: time="2025-05-17T00:30:39.924761208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v5gn8,Uid:e5185284-76b5-43ed-b127-2d0fa638e96d,Namespace:calico-system,Attempt:0,} returns sandbox id \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\"" May 17 00:30:40.406309 systemd[1]: run-containerd-runc-k8s.io-ddfceffae3d464e23c5a497e99b7724b602a1fb1950589b664bf76309822ce49-runc.Ztd1eX.mount: Deactivated successfully.
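Note that the sandbox id returned by RunPodSandbox is the same 64-hex-digit string in the systemd unit started at 00:30:39.896: with the systemd cgroup driver, containerd wraps each container in a transient scope named after the container id. A one-line sketch of the mapping as it appears in these lines (an observation from this log, not containerd source):

    package main

    import "fmt"

    // scopeUnit builds the transient unit name seen above for sandbox 797e18...
    func scopeUnit(containerID string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", containerID)
    }

    func main() {
        fmt.Println(scopeUnit("797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b"))
    }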
May 17 00:30:40.699651 containerd[1476]: time="2025-05-17T00:30:40.699422013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:40.700302 containerd[1476]: time="2025-05-17T00:30:40.700193731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:30:40.700879 containerd[1476]: time="2025-05-17T00:30:40.700692786Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:40.702114 containerd[1476]: time="2025-05-17T00:30:40.702088972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:40.702639 containerd[1476]: time="2025-05-17T00:30:40.702611787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.044821927s" May 17 00:30:40.702667 containerd[1476]: time="2025-05-17T00:30:40.702638038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:30:40.703445 containerd[1476]: time="2025-05-17T00:30:40.703412526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:30:40.714661 containerd[1476]: time="2025-05-17T00:30:40.714631498Z" level=info msg="CreateContainer within sandbox \"ddfceffae3d464e23c5a497e99b7724b602a1fb1950589b664bf76309822ce49\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:30:40.721616 containerd[1476]: time="2025-05-17T00:30:40.721587523Z" level=info msg="CreateContainer within sandbox \"ddfceffae3d464e23c5a497e99b7724b602a1fb1950589b664bf76309822ce49\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea378bf67242cac3c14811ff66c436495b3b04a4531d4b7a0e896c1be74de53a\"" May 17 00:30:40.723418 containerd[1476]: time="2025-05-17T00:30:40.723246501Z" level=info msg="StartContainer for \"ea378bf67242cac3c14811ff66c436495b3b04a4531d4b7a0e896c1be74de53a\"" May 17 00:30:40.744540 systemd[1]: Started cri-containerd-ea378bf67242cac3c14811ff66c436495b3b04a4531d4b7a0e896c1be74de53a.scope - libcontainer container ea378bf67242cac3c14811ff66c436495b3b04a4531d4b7a0e896c1be74de53a. 
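The typha pull entry carries enough data for a throughput estimate: 35158669 bytes read in 1.044821927s is about 33.7 MB/s. A quick Go check of that arithmetic, with the numbers copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // From the "Pulled image ...typha:v3.30.0" entry above.
        const bytesRead = 35158669.0
        d, err := time.ParseDuration("1.044821927s")
        if err != nil {
            panic(err)
        }
        fmt.Printf("effective pull rate: %.1f MB/s\n", bytesRead/d.Seconds()/1e6) // ~33.7
    }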
May 17 00:30:40.784320 containerd[1476]: time="2025-05-17T00:30:40.784259523Z" level=info msg="StartContainer for \"ea378bf67242cac3c14811ff66c436495b3b04a4531d4b7a0e896c1be74de53a\" returns successfully" May 17 00:30:41.326262 containerd[1476]: time="2025-05-17T00:30:41.326214409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:41.327019 containerd[1476]: time="2025-05-17T00:30:41.326987357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:30:41.327612 containerd[1476]: time="2025-05-17T00:30:41.327578303Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:41.329533 containerd[1476]: time="2025-05-17T00:30:41.329016428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:41.329533 containerd[1476]: time="2025-05-17T00:30:41.329417542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 625.957566ms" May 17 00:30:41.329533 containerd[1476]: time="2025-05-17T00:30:41.329465043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:30:41.331377 containerd[1476]: time="2025-05-17T00:30:41.331343682Z" level=info msg="CreateContainer within sandbox \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:30:41.338610 containerd[1476]: time="2025-05-17T00:30:41.338578315Z" level=info msg="CreateContainer within sandbox \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae\"" May 17 00:30:41.339848 containerd[1476]: time="2025-05-17T00:30:41.338897668Z" level=info msg="StartContainer for \"4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae\"" May 17 00:30:41.364551 systemd[1]: Started cri-containerd-4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae.scope - libcontainer container 4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae. May 17 00:30:41.386351 containerd[1476]: time="2025-05-17T00:30:41.386332141Z" level=info msg="StartContainer for \"4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae\" returns successfully" May 17 00:30:41.404116 systemd[1]: cri-containerd-4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae.scope: Deactivated successfully. May 17 00:30:41.423409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae-rootfs.mount: Deactivated successfully. 
May 17 00:30:41.459263 containerd[1476]: time="2025-05-17T00:30:41.459006709Z" level=info msg="shim disconnected" id=4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae namespace=k8s.io May 17 00:30:41.459263 containerd[1476]: time="2025-05-17T00:30:41.459092760Z" level=warning msg="cleaning up after shim disconnected" id=4e1c5a285ffb421fc0ec196058c39cbef98b9ec395350f4105ffc145383ce8ae namespace=k8s.io May 17 00:30:41.459263 containerd[1476]: time="2025-05-17T00:30:41.459145121Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:30:41.472612 containerd[1476]: time="2025-05-17T00:30:41.472577107Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:30:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:30:41.692060 kubelet[2529]: E0517 00:30:41.692027 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9kj7" podUID="0996e84d-dd0b-49e3-addd-0931e48a258e" May 17 00:30:41.770457 kubelet[2529]: E0517 00:30:41.768331 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:41.775465 containerd[1476]: time="2025-05-17T00:30:41.775407646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:30:41.779094 kubelet[2529]: I0517 00:30:41.778553 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64f847d955-rq9qk" podStartSLOduration=1.73275574 podStartE2EDuration="2.778540038s" podCreationTimestamp="2025-05-17 00:30:39 +0000 UTC" firstStartedPulling="2025-05-17 00:30:39.657534927 +0000 UTC m=+17.058024718" lastFinishedPulling="2025-05-17 00:30:40.703319215 +0000 UTC m=+18.103809016" observedRunningTime="2025-05-17 00:30:41.777695179 +0000 UTC m=+19.178184980" watchObservedRunningTime="2025-05-17 00:30:41.778540038 +0000 UTC m=+19.179029829" May 17 00:30:42.774755 kubelet[2529]: I0517 00:30:42.774611 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:30:42.775892 kubelet[2529]: E0517 00:30:42.775385 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:43.300282 containerd[1476]: time="2025-05-17T00:30:43.300233203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:43.301274 containerd[1476]: time="2025-05-17T00:30:43.301226162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:30:43.301706 containerd[1476]: time="2025-05-17T00:30:43.301467784Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:43.302916 containerd[1476]: time="2025-05-17T00:30:43.302893477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:43.303495 containerd[1476]: time="2025-05-17T00:30:43.303458802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 1.528000306s" May 17 00:30:43.303495 containerd[1476]: time="2025-05-17T00:30:43.303486713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:30:43.305319 containerd[1476]: time="2025-05-17T00:30:43.305301409Z" level=info msg="CreateContainer within sandbox \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:30:43.328074 containerd[1476]: time="2025-05-17T00:30:43.328023802Z" level=info msg="CreateContainer within sandbox \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33\"" May 17 00:30:43.328627 containerd[1476]: time="2025-05-17T00:30:43.328606427Z" level=info msg="StartContainer for \"78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33\"" May 17 00:30:43.351539 systemd[1]: run-containerd-runc-k8s.io-78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33-runc.bpRpH4.mount: Deactivated successfully. May 17 00:30:43.358539 systemd[1]: Started cri-containerd-78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33.scope - libcontainer container 78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33. May 17 00:30:43.382853 containerd[1476]: time="2025-05-17T00:30:43.382823101Z" level=info msg="StartContainer for \"78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33\" returns successfully" May 17 00:30:43.692382 kubelet[2529]: E0517 00:30:43.692334 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9kj7" podUID="0996e84d-dd0b-49e3-addd-0931e48a258e" May 17 00:30:43.780904 containerd[1476]: time="2025-05-17T00:30:43.778598528Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:30:43.782864 systemd[1]: cri-containerd-78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33.scope: Deactivated successfully. 
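The pod_startup_latency_tracker entry at 00:30:41.778 above is internally consistent and shows how the kubelet excludes image-pull time from the startup SLO: using the monotonic m=+ offsets, the pull window ran from m=+17.058024718 to m=+18.103809016 (1.045784298s); end-to-end startup was 2.778540038s (podCreationTimestamp 00:30:39 to watchObservedRunningTime 00:30:41.778540038); and 2.778540038 - 1.045784298 = 1.732755740s, the reported podStartSLOduration. The same arithmetic in Go, with the offsets copied from the log:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+...) from the tracker entry above.
        firstStartedPulling := 17.058024718
        lastFinishedPulling := 18.103809016
        e2e := 2.778540038 // creation at 00:30:39 -> running at 00:30:41.778540038

        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pull window: %.9fs\n", pull)     // 1.045784298s
        fmt.Printf("SLO duration:      %.9fs\n", e2e-pull) // 1.732755740s ~= 1.73275574
    }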
May 17 00:30:43.859354 containerd[1476]: time="2025-05-17T00:30:43.859294819Z" level=info msg="shim disconnected" id=78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33 namespace=k8s.io May 17 00:30:43.859586 containerd[1476]: time="2025-05-17T00:30:43.859571831Z" level=warning msg="cleaning up after shim disconnected" id=78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33 namespace=k8s.io May 17 00:30:43.859646 containerd[1476]: time="2025-05-17T00:30:43.859620932Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:30:43.866786 kubelet[2529]: I0517 00:30:43.866764 2529 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:30:43.891938 systemd[1]: Created slice kubepods-burstable-podeb74581d_78ad_4419_8238_b440c64be7cd.slice - libcontainer container kubepods-burstable-podeb74581d_78ad_4419_8238_b440c64be7cd.slice. May 17 00:30:43.906197 systemd[1]: Created slice kubepods-burstable-pod02488dc1_7388_4c3e_bda7_2622333fb0c8.slice - libcontainer container kubepods-burstable-pod02488dc1_7388_4c3e_bda7_2622333fb0c8.slice. May 17 00:30:43.914941 systemd[1]: Created slice kubepods-besteffort-pod16a8edb6_95df_4bc2_a130_7cc52db94763.slice - libcontainer container kubepods-besteffort-pod16a8edb6_95df_4bc2_a130_7cc52db94763.slice. May 17 00:30:43.924375 systemd[1]: Created slice kubepods-besteffort-pod7e0aafb5_c219_4523_9e5b_1fe312a4aa2d.slice - libcontainer container kubepods-besteffort-pod7e0aafb5_c219_4523_9e5b_1fe312a4aa2d.slice. May 17 00:30:43.925792 kubelet[2529]: I0517 00:30:43.925279 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbsbj\" (UniqueName: \"kubernetes.io/projected/05eee7e2-28d6-4731-8213-05c2e2b3f360-kube-api-access-rbsbj\") pod \"whisker-57fb894b7c-5tcq4\" (UID: \"05eee7e2-28d6-4731-8213-05c2e2b3f360\") " pod="calico-system/whisker-57fb894b7c-5tcq4" May 17 00:30:43.925792 kubelet[2529]: I0517 00:30:43.925310 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4bld\" (UniqueName: \"kubernetes.io/projected/dd5a12e6-0476-4ee4-9663-5e2d40e20810-kube-api-access-q4bld\") pod \"calico-apiserver-59c6b49969-lmb87\" (UID: \"dd5a12e6-0476-4ee4-9663-5e2d40e20810\") " pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" May 17 00:30:43.925792 kubelet[2529]: I0517 00:30:43.925327 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb74581d-78ad-4419-8238-b440c64be7cd-config-volume\") pod \"coredns-7c65d6cfc9-q7n6z\" (UID: \"eb74581d-78ad-4419-8238-b440c64be7cd\") " pod="kube-system/coredns-7c65d6cfc9-q7n6z" May 17 00:30:43.925792 kubelet[2529]: I0517 00:30:43.925340 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f42rx\" (UniqueName: \"kubernetes.io/projected/02488dc1-7388-4c3e-bda7-2622333fb0c8-kube-api-access-f42rx\") pod \"coredns-7c65d6cfc9-wvn6c\" (UID: \"02488dc1-7388-4c3e-bda7-2622333fb0c8\") " pod="kube-system/coredns-7c65d6cfc9-wvn6c" May 17 00:30:43.925792 kubelet[2529]: I0517 00:30:43.925353 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee80876b-aa39-4375-a4e1-fd4e85f8d3ee-config\") pod \"goldmane-8f77d7b6c-s52mw\" (UID: \"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee\") " 
pod="calico-system/goldmane-8f77d7b6c-s52mw" May 17 00:30:43.925955 kubelet[2529]: I0517 00:30:43.925364 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp8vf\" (UniqueName: \"kubernetes.io/projected/eb74581d-78ad-4419-8238-b440c64be7cd-kube-api-access-bp8vf\") pod \"coredns-7c65d6cfc9-q7n6z\" (UID: \"eb74581d-78ad-4419-8238-b440c64be7cd\") " pod="kube-system/coredns-7c65d6cfc9-q7n6z" May 17 00:30:43.925955 kubelet[2529]: I0517 00:30:43.925375 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2znr\" (UniqueName: \"kubernetes.io/projected/16a8edb6-95df-4bc2-a130-7cc52db94763-kube-api-access-q2znr\") pod \"calico-apiserver-7cf648ccbb-chqjq\" (UID: \"16a8edb6-95df-4bc2-a130-7cc52db94763\") " pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" May 17 00:30:43.925955 kubelet[2529]: I0517 00:30:43.925387 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wwlm\" (UniqueName: \"kubernetes.io/projected/f71e5f0b-7c52-4c28-8833-5eea34a70a67-kube-api-access-5wwlm\") pod \"calico-kube-controllers-96dc47b75-xvwdn\" (UID: \"f71e5f0b-7c52-4c28-8833-5eea34a70a67\") " pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" May 17 00:30:43.925955 kubelet[2529]: I0517 00:30:43.925409 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd5a12e6-0476-4ee4-9663-5e2d40e20810-calico-apiserver-certs\") pod \"calico-apiserver-59c6b49969-lmb87\" (UID: \"dd5a12e6-0476-4ee4-9663-5e2d40e20810\") " pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" May 17 00:30:43.925955 kubelet[2529]: I0517 00:30:43.925441 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16a8edb6-95df-4bc2-a130-7cc52db94763-calico-apiserver-certs\") pod \"calico-apiserver-7cf648ccbb-chqjq\" (UID: \"16a8edb6-95df-4bc2-a130-7cc52db94763\") " pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" May 17 00:30:43.926062 kubelet[2529]: I0517 00:30:43.925455 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-calico-apiserver-certs\") pod \"calico-apiserver-7cf648ccbb-wj8jt\" (UID: \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\") " pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" May 17 00:30:43.926062 kubelet[2529]: I0517 00:30:43.925467 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr67\" (UniqueName: \"kubernetes.io/projected/ee80876b-aa39-4375-a4e1-fd4e85f8d3ee-kube-api-access-bnr67\") pod \"goldmane-8f77d7b6c-s52mw\" (UID: \"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee\") " pod="calico-system/goldmane-8f77d7b6c-s52mw" May 17 00:30:43.926062 kubelet[2529]: I0517 00:30:43.925479 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02488dc1-7388-4c3e-bda7-2622333fb0c8-config-volume\") pod \"coredns-7c65d6cfc9-wvn6c\" (UID: \"02488dc1-7388-4c3e-bda7-2622333fb0c8\") " pod="kube-system/coredns-7c65d6cfc9-wvn6c" May 17 00:30:43.926062 kubelet[2529]: I0517 00:30:43.925491 2529 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-backend-key-pair\") pod \"whisker-57fb894b7c-5tcq4\" (UID: \"05eee7e2-28d6-4731-8213-05c2e2b3f360\") " pod="calico-system/whisker-57fb894b7c-5tcq4" May 17 00:30:43.926062 kubelet[2529]: I0517 00:30:43.925510 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlkq\" (UniqueName: \"kubernetes.io/projected/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-kube-api-access-jxlkq\") pod \"calico-apiserver-7cf648ccbb-wj8jt\" (UID: \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\") " pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" May 17 00:30:43.926157 kubelet[2529]: I0517 00:30:43.925523 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ee80876b-aa39-4375-a4e1-fd4e85f8d3ee-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-s52mw\" (UID: \"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee\") " pod="calico-system/goldmane-8f77d7b6c-s52mw" May 17 00:30:43.926157 kubelet[2529]: I0517 00:30:43.925543 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f71e5f0b-7c52-4c28-8833-5eea34a70a67-tigera-ca-bundle\") pod \"calico-kube-controllers-96dc47b75-xvwdn\" (UID: \"f71e5f0b-7c52-4c28-8833-5eea34a70a67\") " pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" May 17 00:30:43.926157 kubelet[2529]: I0517 00:30:43.925554 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee80876b-aa39-4375-a4e1-fd4e85f8d3ee-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-s52mw\" (UID: \"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee\") " pod="calico-system/goldmane-8f77d7b6c-s52mw" May 17 00:30:43.926157 kubelet[2529]: I0517 00:30:43.925565 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-ca-bundle\") pod \"whisker-57fb894b7c-5tcq4\" (UID: \"05eee7e2-28d6-4731-8213-05c2e2b3f360\") " pod="calico-system/whisker-57fb894b7c-5tcq4" May 17 00:30:43.938232 systemd[1]: Created slice kubepods-besteffort-podf71e5f0b_7c52_4c28_8833_5eea34a70a67.slice - libcontainer container kubepods-besteffort-podf71e5f0b_7c52_4c28_8833_5eea34a70a67.slice. May 17 00:30:43.947233 systemd[1]: Created slice kubepods-besteffort-podee80876b_aa39_4375_a4e1_fd4e85f8d3ee.slice - libcontainer container kubepods-besteffort-podee80876b_aa39_4375_a4e1_fd4e85f8d3ee.slice. May 17 00:30:43.956057 systemd[1]: Created slice kubepods-besteffort-poddd5a12e6_0476_4ee4_9663_5e2d40e20810.slice - libcontainer container kubepods-besteffort-poddd5a12e6_0476_4ee4_9663_5e2d40e20810.slice. May 17 00:30:43.960248 systemd[1]: Created slice kubepods-besteffort-pod05eee7e2_28d6_4731_8213_05c2e2b3f360.slice - libcontainer container kubepods-besteffort-pod05eee7e2_28d6_4731_8213_05c2e2b3f360.slice. 
May 17 00:30:44.201025 kubelet[2529]: E0517 00:30:44.200549 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:44.201315 containerd[1476]: time="2025-05-17T00:30:44.201203022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q7n6z,Uid:eb74581d-78ad-4419-8238-b440c64be7cd,Namespace:kube-system,Attempt:0,}" May 17 00:30:44.216095 kubelet[2529]: E0517 00:30:44.216070 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:44.218228 containerd[1476]: time="2025-05-17T00:30:44.217436438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wvn6c,Uid:02488dc1-7388-4c3e-bda7-2622333fb0c8,Namespace:kube-system,Attempt:0,}" May 17 00:30:44.221870 containerd[1476]: time="2025-05-17T00:30:44.221848765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-chqjq,Uid:16a8edb6-95df-4bc2-a130-7cc52db94763,Namespace:calico-apiserver,Attempt:0,}" May 17 00:30:44.235063 containerd[1476]: time="2025-05-17T00:30:44.235034176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-wj8jt,Uid:7e0aafb5-c219-4523-9e5b-1fe312a4aa2d,Namespace:calico-apiserver,Attempt:0,}" May 17 00:30:44.245323 containerd[1476]: time="2025-05-17T00:30:44.245010679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dc47b75-xvwdn,Uid:f71e5f0b-7c52-4c28-8833-5eea34a70a67,Namespace:calico-system,Attempt:0,}" May 17 00:30:44.254127 containerd[1476]: time="2025-05-17T00:30:44.254096655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-s52mw,Uid:ee80876b-aa39-4375-a4e1-fd4e85f8d3ee,Namespace:calico-system,Attempt:0,}" May 17 00:30:44.260749 containerd[1476]: time="2025-05-17T00:30:44.260229977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6b49969-lmb87,Uid:dd5a12e6-0476-4ee4-9663-5e2d40e20810,Namespace:calico-apiserver,Attempt:0,}" May 17 00:30:44.264151 containerd[1476]: time="2025-05-17T00:30:44.264130349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb894b7c-5tcq4,Uid:05eee7e2-28d6-4731-8213-05c2e2b3f360,Namespace:calico-system,Attempt:0,}" May 17 00:30:44.315986 containerd[1476]: time="2025-05-17T00:30:44.315861923Z" level=error msg="Failed to destroy network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.317454 containerd[1476]: time="2025-05-17T00:30:44.317326105Z" level=error msg="encountered an error cleaning up failed sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.317899 containerd[1476]: time="2025-05-17T00:30:44.317858479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q7n6z,Uid:eb74581d-78ad-4419-8238-b440c64be7cd,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.318261 kubelet[2529]: E0517 00:30:44.318195 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.318537 kubelet[2529]: E0517 00:30:44.318409 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q7n6z" May 17 00:30:44.318537 kubelet[2529]: E0517 00:30:44.318471 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q7n6z" May 17 00:30:44.318863 kubelet[2529]: E0517 00:30:44.318517 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q7n6z_kube-system(eb74581d-78ad-4419-8238-b440c64be7cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q7n6z_kube-system(eb74581d-78ad-4419-8238-b440c64be7cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q7n6z" podUID="eb74581d-78ad-4419-8238-b440c64be7cd" May 17 00:30:44.339998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78a3da825a7eb5dbd0009ed52744e818743bd159d326bfde08a2e9fc447c9e33-rootfs.mount: Deactivated successfully. May 17 00:30:44.346093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71-shm.mount: Deactivated successfully. 
May 17 00:30:44.444600 containerd[1476]: time="2025-05-17T00:30:44.442205041Z" level=error msg="Failed to destroy network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.444600 containerd[1476]: time="2025-05-17T00:30:44.442821706Z" level=error msg="encountered an error cleaning up failed sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.444600 containerd[1476]: time="2025-05-17T00:30:44.444514630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wvn6c,Uid:02488dc1-7388-4c3e-bda7-2622333fb0c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.444412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341-shm.mount: Deactivated successfully. May 17 00:30:44.446808 kubelet[2529]: E0517 00:30:44.444961 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.446808 kubelet[2529]: E0517 00:30:44.445017 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wvn6c" May 17 00:30:44.446808 kubelet[2529]: E0517 00:30:44.445032 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wvn6c" May 17 00:30:44.446899 kubelet[2529]: E0517 00:30:44.445134 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wvn6c_kube-system(02488dc1-7388-4c3e-bda7-2622333fb0c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wvn6c_kube-system(02488dc1-7388-4c3e-bda7-2622333fb0c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wvn6c" podUID="02488dc1-7388-4c3e-bda7-2622333fb0c8" May 17 00:30:44.447081 containerd[1476]: time="2025-05-17T00:30:44.447048721Z" level=error msg="Failed to destroy network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.448832 containerd[1476]: time="2025-05-17T00:30:44.448593804Z" level=error msg="encountered an error cleaning up failed sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.449454 containerd[1476]: time="2025-05-17T00:30:44.448910887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-wj8jt,Uid:7e0aafb5-c219-4523-9e5b-1fe312a4aa2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.449678 kubelet[2529]: E0517 00:30:44.449647 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.449714 kubelet[2529]: E0517 00:30:44.449692 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" May 17 00:30:44.449714 kubelet[2529]: E0517 00:30:44.449710 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" May 17 00:30:44.449770 kubelet[2529]: E0517 00:30:44.449735 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf648ccbb-wj8jt_calico-apiserver(7e0aafb5-c219-4523-9e5b-1fe312a4aa2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf648ccbb-wj8jt_calico-apiserver(7e0aafb5-c219-4523-9e5b-1fe312a4aa2d)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" podUID="7e0aafb5-c219-4523-9e5b-1fe312a4aa2d" May 17 00:30:44.450705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a-shm.mount: Deactivated successfully. May 17 00:30:44.464750 containerd[1476]: time="2025-05-17T00:30:44.464673939Z" level=error msg="Failed to destroy network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.466818 containerd[1476]: time="2025-05-17T00:30:44.466797697Z" level=error msg="encountered an error cleaning up failed sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.466990 containerd[1476]: time="2025-05-17T00:30:44.466971298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-chqjq,Uid:16a8edb6-95df-4bc2-a130-7cc52db94763,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.467387 kubelet[2529]: E0517 00:30:44.467222 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.467387 kubelet[2529]: E0517 00:30:44.467268 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" May 17 00:30:44.467387 kubelet[2529]: E0517 00:30:44.467283 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" May 17 00:30:44.467860 kubelet[2529]: E0517 00:30:44.467313 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"calico-apiserver-7cf648ccbb-chqjq_calico-apiserver(16a8edb6-95df-4bc2-a130-7cc52db94763)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf648ccbb-chqjq_calico-apiserver(16a8edb6-95df-4bc2-a130-7cc52db94763)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" podUID="16a8edb6-95df-4bc2-a130-7cc52db94763" May 17 00:30:44.468135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe-shm.mount: Deactivated successfully. May 17 00:30:44.469502 containerd[1476]: time="2025-05-17T00:30:44.469119566Z" level=error msg="Failed to destroy network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.469794 containerd[1476]: time="2025-05-17T00:30:44.469747422Z" level=error msg="encountered an error cleaning up failed sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.469832 containerd[1476]: time="2025-05-17T00:30:44.469798172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-s52mw,Uid:ee80876b-aa39-4375-a4e1-fd4e85f8d3ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.470611 kubelet[2529]: E0517 00:30:44.470573 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.470663 kubelet[2529]: E0517 00:30:44.470641 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-s52mw" May 17 00:30:44.470788 kubelet[2529]: E0517 00:30:44.470763 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-s52mw" May 17 00:30:44.471373 kubelet[2529]: E0517 00:30:44.471318 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-s52mw_calico-system(ee80876b-aa39-4375-a4e1-fd4e85f8d3ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-s52mw_calico-system(ee80876b-aa39-4375-a4e1-fd4e85f8d3ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:30:44.496549 containerd[1476]: time="2025-05-17T00:30:44.496481816Z" level=error msg="Failed to destroy network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.496748 containerd[1476]: time="2025-05-17T00:30:44.496504826Z" level=error msg="Failed to destroy network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.497085 containerd[1476]: time="2025-05-17T00:30:44.496903289Z" level=error msg="encountered an error cleaning up failed sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.497085 containerd[1476]: time="2025-05-17T00:30:44.496953840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fb894b7c-5tcq4,Uid:05eee7e2-28d6-4731-8213-05c2e2b3f360,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.497085 containerd[1476]: time="2025-05-17T00:30:44.497020210Z" level=error msg="encountered an error cleaning up failed sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.497085 containerd[1476]: time="2025-05-17T00:30:44.497050740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dc47b75-xvwdn,Uid:f71e5f0b-7c52-4c28-8833-5eea34a70a67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.497733 kubelet[2529]: E0517 00:30:44.497701 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.497918 kubelet[2529]: E0517 00:30:44.497898 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" May 17 00:30:44.497946 kubelet[2529]: E0517 00:30:44.497921 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" May 17 00:30:44.498060 kubelet[2529]: E0517 00:30:44.497968 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-96dc47b75-xvwdn_calico-system(f71e5f0b-7c52-4c28-8833-5eea34a70a67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-96dc47b75-xvwdn_calico-system(f71e5f0b-7c52-4c28-8833-5eea34a70a67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" podUID="f71e5f0b-7c52-4c28-8833-5eea34a70a67" May 17 00:30:44.498342 kubelet[2529]: E0517 00:30:44.497381 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.498377 kubelet[2529]: E0517 00:30:44.498351 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fb894b7c-5tcq4" May 17 00:30:44.498377 kubelet[2529]: E0517 00:30:44.498366 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fb894b7c-5tcq4" May 17 00:30:44.498418 kubelet[2529]: E0517 00:30:44.498390 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57fb894b7c-5tcq4_calico-system(05eee7e2-28d6-4731-8213-05c2e2b3f360)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57fb894b7c-5tcq4_calico-system(05eee7e2-28d6-4731-8213-05c2e2b3f360)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fb894b7c-5tcq4" podUID="05eee7e2-28d6-4731-8213-05c2e2b3f360" May 17 00:30:44.500413 containerd[1476]: time="2025-05-17T00:30:44.500294688Z" level=error msg="Failed to destroy network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.500838 containerd[1476]: time="2025-05-17T00:30:44.500791052Z" level=error msg="encountered an error cleaning up failed sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.500940 containerd[1476]: time="2025-05-17T00:30:44.500883782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6b49969-lmb87,Uid:dd5a12e6-0476-4ee4-9663-5e2d40e20810,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.501286 kubelet[2529]: E0517 00:30:44.501270 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.501336 kubelet[2529]: E0517 00:30:44.501293 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" May 17 00:30:44.501336 kubelet[2529]: E0517 00:30:44.501308 
2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" May 17 00:30:44.501405 kubelet[2529]: E0517 00:30:44.501330 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c6b49969-lmb87_calico-apiserver(dd5a12e6-0476-4ee4-9663-5e2d40e20810)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c6b49969-lmb87_calico-apiserver(dd5a12e6-0476-4ee4-9663-5e2d40e20810)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" podUID="dd5a12e6-0476-4ee4-9663-5e2d40e20810" May 17 00:30:44.787384 kubelet[2529]: I0517 00:30:44.786517 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:30:44.787447 containerd[1476]: time="2025-05-17T00:30:44.787022199Z" level=info msg="StopPodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\"" May 17 00:30:44.787447 containerd[1476]: time="2025-05-17T00:30:44.787130710Z" level=info msg="Ensure that sandbox 2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341 in task-service has been cleanup successfully" May 17 00:30:44.788309 kubelet[2529]: I0517 00:30:44.788296 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:30:44.789108 containerd[1476]: time="2025-05-17T00:30:44.789083427Z" level=info msg="StopPodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\"" May 17 00:30:44.789252 containerd[1476]: time="2025-05-17T00:30:44.789177117Z" level=info msg="Ensure that sandbox 436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a in task-service has been cleanup successfully" May 17 00:30:44.791143 kubelet[2529]: I0517 00:30:44.790704 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:30:44.791622 containerd[1476]: time="2025-05-17T00:30:44.791604888Z" level=info msg="StopPodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\"" May 17 00:30:44.792083 containerd[1476]: time="2025-05-17T00:30:44.792066212Z" level=info msg="Ensure that sandbox d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71 in task-service has been cleanup successfully" May 17 00:30:44.801716 containerd[1476]: time="2025-05-17T00:30:44.801690352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:30:44.805868 kubelet[2529]: I0517 00:30:44.805827 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:30:44.807088 
containerd[1476]: time="2025-05-17T00:30:44.807055487Z" level=info msg="StopPodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\"" May 17 00:30:44.807238 containerd[1476]: time="2025-05-17T00:30:44.807180478Z" level=info msg="Ensure that sandbox 07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0 in task-service has been cleanup successfully" May 17 00:30:44.809690 kubelet[2529]: I0517 00:30:44.809233 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:30:44.810148 containerd[1476]: time="2025-05-17T00:30:44.810089653Z" level=info msg="StopPodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\"" May 17 00:30:44.810285 containerd[1476]: time="2025-05-17T00:30:44.810206654Z" level=info msg="Ensure that sandbox c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b in task-service has been cleanup successfully" May 17 00:30:44.815746 kubelet[2529]: I0517 00:30:44.815674 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:30:44.816145 containerd[1476]: time="2025-05-17T00:30:44.815949892Z" level=info msg="StopPodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\"" May 17 00:30:44.816753 containerd[1476]: time="2025-05-17T00:30:44.816561047Z" level=info msg="Ensure that sandbox 5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d in task-service has been cleanup successfully" May 17 00:30:44.819864 kubelet[2529]: I0517 00:30:44.819795 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:30:44.820584 containerd[1476]: time="2025-05-17T00:30:44.820560830Z" level=info msg="StopPodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\"" May 17 00:30:44.820914 containerd[1476]: time="2025-05-17T00:30:44.820670541Z" level=info msg="Ensure that sandbox e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef in task-service has been cleanup successfully" May 17 00:30:44.822872 kubelet[2529]: I0517 00:30:44.822849 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:30:44.824041 containerd[1476]: time="2025-05-17T00:30:44.823693986Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" May 17 00:30:44.824041 containerd[1476]: time="2025-05-17T00:30:44.823795267Z" level=info msg="Ensure that sandbox ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe in task-service has been cleanup successfully" May 17 00:30:44.864288 containerd[1476]: time="2025-05-17T00:30:44.864253786Z" level=error msg="StopPodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" failed" error="failed to destroy network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.864690 kubelet[2529]: E0517 00:30:44.864601 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:30:44.864690 kubelet[2529]: E0517 00:30:44.864650 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341"} May 17 00:30:44.864774 kubelet[2529]: E0517 00:30:44.864706 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02488dc1-7388-4c3e-bda7-2622333fb0c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.864774 kubelet[2529]: E0517 00:30:44.864723 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02488dc1-7388-4c3e-bda7-2622333fb0c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wvn6c" podUID="02488dc1-7388-4c3e-bda7-2622333fb0c8" May 17 00:30:44.889970 containerd[1476]: time="2025-05-17T00:30:44.889834890Z" level=error msg="StopPodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" failed" error="failed to destroy network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.890166 kubelet[2529]: E0517 00:30:44.890114 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:30:44.890449 kubelet[2529]: E0517 00:30:44.890172 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a"} May 17 00:30:44.890449 kubelet[2529]: E0517 00:30:44.890198 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" May 17 00:30:44.890449 kubelet[2529]: E0517 00:30:44.890232 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" podUID="7e0aafb5-c219-4523-9e5b-1fe312a4aa2d" May 17 00:30:44.891574 containerd[1476]: time="2025-05-17T00:30:44.891270343Z" level=error msg="StopPodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" failed" error="failed to destroy network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.891607 kubelet[2529]: E0517 00:30:44.891390 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:30:44.891607 kubelet[2529]: E0517 00:30:44.891408 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71"} May 17 00:30:44.891607 kubelet[2529]: E0517 00:30:44.891435 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb74581d-78ad-4419-8238-b440c64be7cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.891607 kubelet[2529]: E0517 00:30:44.891473 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb74581d-78ad-4419-8238-b440c64be7cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q7n6z" podUID="eb74581d-78ad-4419-8238-b440c64be7cd" May 17 00:30:44.895408 containerd[1476]: time="2025-05-17T00:30:44.895364157Z" level=error msg="StopPodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" failed" error="failed to destroy network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.895743 kubelet[2529]: E0517 00:30:44.895506 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:30:44.895743 kubelet[2529]: E0517 00:30:44.895539 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0"} May 17 00:30:44.895743 kubelet[2529]: E0517 00:30:44.895557 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd5a12e6-0476-4ee4-9663-5e2d40e20810\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.895743 kubelet[2529]: E0517 00:30:44.895571 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd5a12e6-0476-4ee4-9663-5e2d40e20810\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" podUID="dd5a12e6-0476-4ee4-9663-5e2d40e20810" May 17 00:30:44.896800 containerd[1476]: time="2025-05-17T00:30:44.896768319Z" level=error msg="StopPodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" failed" error="failed to destroy network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.896968 containerd[1476]: time="2025-05-17T00:30:44.896951320Z" level=error msg="StopPodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" failed" error="failed to destroy network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.897002 kubelet[2529]: E0517 00:30:44.896967 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:30:44.897002 kubelet[2529]: E0517 00:30:44.896994 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef"} May 17 00:30:44.897041 kubelet[2529]: E0517 00:30:44.897010 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.897041 kubelet[2529]: E0517 00:30:44.897027 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:30:44.897244 kubelet[2529]: E0517 00:30:44.897218 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:30:44.897587 kubelet[2529]: E0517 00:30:44.897550 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b"} May 17 00:30:44.897587 kubelet[2529]: E0517 00:30:44.897576 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f71e5f0b-7c52-4c28-8833-5eea34a70a67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.897648 kubelet[2529]: E0517 00:30:44.897589 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f71e5f0b-7c52-4c28-8833-5eea34a70a67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" podUID="f71e5f0b-7c52-4c28-8833-5eea34a70a67" May 17 00:30:44.897819 containerd[1476]: 
time="2025-05-17T00:30:44.897770647Z" level=error msg="StopPodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" failed" error="failed to destroy network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.898100 kubelet[2529]: E0517 00:30:44.898024 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:30:44.898100 kubelet[2529]: E0517 00:30:44.898043 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d"} May 17 00:30:44.898100 kubelet[2529]: E0517 00:30:44.898059 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05eee7e2-28d6-4731-8213-05c2e2b3f360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.898100 kubelet[2529]: E0517 00:30:44.898073 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05eee7e2-28d6-4731-8213-05c2e2b3f360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fb894b7c-5tcq4" podUID="05eee7e2-28d6-4731-8213-05c2e2b3f360" May 17 00:30:44.902490 containerd[1476]: time="2025-05-17T00:30:44.902460826Z" level=error msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" failed" error="failed to destroy network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:44.902575 kubelet[2529]: E0517 00:30:44.902556 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:30:44.902604 kubelet[2529]: E0517 00:30:44.902578 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe"} May 17 00:30:44.902640 kubelet[2529]: E0517 00:30:44.902594 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16a8edb6-95df-4bc2-a130-7cc52db94763\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:44.902640 kubelet[2529]: E0517 00:30:44.902617 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16a8edb6-95df-4bc2-a130-7cc52db94763\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" podUID="16a8edb6-95df-4bc2-a130-7cc52db94763" May 17 00:30:45.314581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0-shm.mount: Deactivated successfully. May 17 00:30:45.314912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d-shm.mount: Deactivated successfully. May 17 00:30:45.315062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b-shm.mount: Deactivated successfully. May 17 00:30:45.315218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef-shm.mount: Deactivated successfully. May 17 00:30:45.698476 systemd[1]: Created slice kubepods-besteffort-pod0996e84d_dd0b_49e3_addd_0931e48a258e.slice - libcontainer container kubepods-besteffort-pod0996e84d_dd0b_49e3_addd_0931e48a258e.slice. 
May 17 00:30:45.700633 containerd[1476]: time="2025-05-17T00:30:45.700594707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9kj7,Uid:0996e84d-dd0b-49e3-addd-0931e48a258e,Namespace:calico-system,Attempt:0,}" May 17 00:30:45.770291 containerd[1476]: time="2025-05-17T00:30:45.770129453Z" level=error msg="Failed to destroy network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:45.771396 containerd[1476]: time="2025-05-17T00:30:45.770669487Z" level=error msg="encountered an error cleaning up failed sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:45.771396 containerd[1476]: time="2025-05-17T00:30:45.770733407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9kj7,Uid:0996e84d-dd0b-49e3-addd-0931e48a258e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:45.772579 kubelet[2529]: E0517 00:30:45.772526 2529 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:45.772657 kubelet[2529]: E0517 00:30:45.772606 2529 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9kj7" May 17 00:30:45.772657 kubelet[2529]: E0517 00:30:45.772625 2529 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9kj7" May 17 00:30:45.772701 kubelet[2529]: E0517 00:30:45.772676 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9kj7_calico-system(0996e84d-dd0b-49e3-addd-0931e48a258e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9kj7_calico-system(0996e84d-dd0b-49e3-addd-0931e48a258e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9kj7" podUID="0996e84d-dd0b-49e3-addd-0931e48a258e" May 17 00:30:45.774155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86-shm.mount: Deactivated successfully. May 17 00:30:45.831258 kubelet[2529]: I0517 00:30:45.831233 2529 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:30:45.832633 containerd[1476]: time="2025-05-17T00:30:45.832610793Z" level=info msg="StopPodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\"" May 17 00:30:45.832896 containerd[1476]: time="2025-05-17T00:30:45.832881636Z" level=info msg="Ensure that sandbox e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86 in task-service has been cleanup successfully" May 17 00:30:45.862518 containerd[1476]: time="2025-05-17T00:30:45.862476348Z" level=error msg="StopPodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" failed" error="failed to destroy network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:30:45.862652 kubelet[2529]: E0517 00:30:45.862617 2529 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:30:45.862730 kubelet[2529]: E0517 00:30:45.862659 2529 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86"} May 17 00:30:45.862730 kubelet[2529]: E0517 00:30:45.862687 2529 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0996e84d-dd0b-49e3-addd-0931e48a258e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:30:45.862730 kubelet[2529]: E0517 00:30:45.862706 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0996e84d-dd0b-49e3-addd-0931e48a258e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-h9kj7" podUID="0996e84d-dd0b-49e3-addd-0931e48a258e" May 17 00:30:47.900335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012258912.mount: Deactivated successfully. May 17 00:30:47.923942 containerd[1476]: time="2025-05-17T00:30:47.922504578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:47.923942 containerd[1476]: time="2025-05-17T00:30:47.923183702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:30:47.923942 containerd[1476]: time="2025-05-17T00:30:47.923887497Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:47.925505 containerd[1476]: time="2025-05-17T00:30:47.925162666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:47.925760 containerd[1476]: time="2025-05-17T00:30:47.925726380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 3.123948837s" May 17 00:30:47.925760 containerd[1476]: time="2025-05-17T00:30:47.925757590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:30:47.943071 containerd[1476]: time="2025-05-17T00:30:47.943049140Z" level=info msg="CreateContainer within sandbox \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:30:47.953264 containerd[1476]: time="2025-05-17T00:30:47.953233520Z" level=info msg="CreateContainer within sandbox \"797e184da22ec60687712cf3abe6e2c52d537e7e00acb4c08383138c4d843a5b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd\"" May 17 00:30:47.953772 containerd[1476]: time="2025-05-17T00:30:47.953754964Z" level=info msg="StartContainer for \"02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd\"" May 17 00:30:47.987576 systemd[1]: Started cri-containerd-02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd.scope - libcontainer container 02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd. May 17 00:30:48.016590 containerd[1476]: time="2025-05-17T00:30:48.016566410Z" level=info msg="StartContainer for \"02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd\" returns successfully" May 17 00:30:48.096777 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:30:48.096844 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 17 00:30:48.169594 containerd[1476]: time="2025-05-17T00:30:48.168834826Z" level=info msg="StopPodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\"" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.259 [INFO][3750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.259 [INFO][3750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" iface="eth0" netns="/var/run/netns/cni-63fd66b0-ea70-4a8f-958b-4c15378633a4" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.261 [INFO][3750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" iface="eth0" netns="/var/run/netns/cni-63fd66b0-ea70-4a8f-958b-4c15378633a4" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.262 [INFO][3750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" iface="eth0" netns="/var/run/netns/cni-63fd66b0-ea70-4a8f-958b-4c15378633a4" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.262 [INFO][3750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.262 [INFO][3750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.303 [INFO][3758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.304 [INFO][3758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.304 [INFO][3758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.322 [WARNING][3758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.322 [INFO][3758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.324 [INFO][3758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:48.332829 containerd[1476]: 2025-05-17 00:30:48.329 [INFO][3750] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:30:48.333208 containerd[1476]: time="2025-05-17T00:30:48.332991138Z" level=info msg="TearDown network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" successfully" May 17 00:30:48.333208 containerd[1476]: time="2025-05-17T00:30:48.333017598Z" level=info msg="StopPodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" returns successfully" May 17 00:30:48.352091 kubelet[2529]: I0517 00:30:48.351403 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-ca-bundle\") pod \"05eee7e2-28d6-4731-8213-05c2e2b3f360\" (UID: \"05eee7e2-28d6-4731-8213-05c2e2b3f360\") " May 17 00:30:48.352091 kubelet[2529]: I0517 00:30:48.351472 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbsbj\" (UniqueName: \"kubernetes.io/projected/05eee7e2-28d6-4731-8213-05c2e2b3f360-kube-api-access-rbsbj\") pod \"05eee7e2-28d6-4731-8213-05c2e2b3f360\" (UID: \"05eee7e2-28d6-4731-8213-05c2e2b3f360\") " May 17 00:30:48.352091 kubelet[2529]: I0517 00:30:48.351499 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-backend-key-pair\") pod \"05eee7e2-28d6-4731-8213-05c2e2b3f360\" (UID: \"05eee7e2-28d6-4731-8213-05c2e2b3f360\") " May 17 00:30:48.352091 kubelet[2529]: I0517 00:30:48.351819 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "05eee7e2-28d6-4731-8213-05c2e2b3f360" (UID: "05eee7e2-28d6-4731-8213-05c2e2b3f360"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:30:48.360797 kubelet[2529]: I0517 00:30:48.360764 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "05eee7e2-28d6-4731-8213-05c2e2b3f360" (UID: "05eee7e2-28d6-4731-8213-05c2e2b3f360"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:30:48.360952 kubelet[2529]: I0517 00:30:48.360912 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05eee7e2-28d6-4731-8213-05c2e2b3f360-kube-api-access-rbsbj" (OuterVolumeSpecName: "kube-api-access-rbsbj") pod "05eee7e2-28d6-4731-8213-05c2e2b3f360" (UID: "05eee7e2-28d6-4731-8213-05c2e2b3f360"). InnerVolumeSpecName "kube-api-access-rbsbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:30:48.452323 kubelet[2529]: I0517 00:30:48.452164 2529 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbsbj\" (UniqueName: \"kubernetes.io/projected/05eee7e2-28d6-4731-8213-05c2e2b3f360-kube-api-access-rbsbj\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:30:48.452960 kubelet[2529]: I0517 00:30:48.452930 2529 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-backend-key-pair\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:30:48.452960 kubelet[2529]: I0517 00:30:48.452947 2529 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05eee7e2-28d6-4731-8213-05c2e2b3f360-whisker-ca-bundle\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:30:48.699745 systemd[1]: Removed slice kubepods-besteffort-pod05eee7e2_28d6_4731_8213_05c2e2b3f360.slice - libcontainer container kubepods-besteffort-pod05eee7e2_28d6_4731_8213_05c2e2b3f360.slice. May 17 00:30:48.860002 kubelet[2529]: I0517 00:30:48.858825 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v5gn8" podStartSLOduration=1.857815343 podStartE2EDuration="9.8588089s" podCreationTimestamp="2025-05-17 00:30:39 +0000 UTC" firstStartedPulling="2025-05-17 00:30:39.925924301 +0000 UTC m=+17.326414092" lastFinishedPulling="2025-05-17 00:30:47.926917848 +0000 UTC m=+25.327407649" observedRunningTime="2025-05-17 00:30:48.858265117 +0000 UTC m=+26.258754918" watchObservedRunningTime="2025-05-17 00:30:48.8588089 +0000 UTC m=+26.259298701" May 17 00:30:48.905317 systemd[1]: run-netns-cni\x2d63fd66b0\x2dea70\x2d4a8f\x2d958b\x2d4c15378633a4.mount: Deactivated successfully. May 17 00:30:48.905418 systemd[1]: var-lib-kubelet-pods-05eee7e2\x2d28d6\x2d4731\x2d8213\x2d05c2e2b3f360-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbsbj.mount: Deactivated successfully. May 17 00:30:48.905512 systemd[1]: var-lib-kubelet-pods-05eee7e2\x2d28d6\x2d4731\x2d8213\x2d05c2e2b3f360-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 00:30:48.911768 kubelet[2529]: W0517 00:30:48.910537 2529 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:172-232-0-241" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node '172-232-0-241' and this object May 17 00:30:48.911768 kubelet[2529]: E0517 00:30:48.910584 2529 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:172-232-0-241\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-232-0-241' and this object" logger="UnhandledError" May 17 00:30:48.911976 kubelet[2529]: W0517 00:30:48.911958 2529 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172-232-0-241' and this object May 17 00:30:48.912243 kubelet[2529]: E0517 00:30:48.912210 2529 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:172-232-0-241\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-232-0-241' and this object" logger="UnhandledError" May 17 00:30:48.918069 systemd[1]: Created slice kubepods-besteffort-poda77cac63_6e4c_448a_ad97_4b194bdcbe50.slice - libcontainer container kubepods-besteffort-poda77cac63_6e4c_448a_ad97_4b194bdcbe50.slice. 
May 17 00:30:49.057003 kubelet[2529]: I0517 00:30:49.056934 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a77cac63-6e4c-448a-ad97-4b194bdcbe50-whisker-ca-bundle\") pod \"whisker-767b6d8985-vppnt\" (UID: \"a77cac63-6e4c-448a-ad97-4b194bdcbe50\") " pod="calico-system/whisker-767b6d8985-vppnt" May 17 00:30:49.057003 kubelet[2529]: I0517 00:30:49.056969 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjvk\" (UniqueName: \"kubernetes.io/projected/a77cac63-6e4c-448a-ad97-4b194bdcbe50-kube-api-access-hkjvk\") pod \"whisker-767b6d8985-vppnt\" (UID: \"a77cac63-6e4c-448a-ad97-4b194bdcbe50\") " pod="calico-system/whisker-767b6d8985-vppnt" May 17 00:30:49.057003 kubelet[2529]: I0517 00:30:49.056985 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a77cac63-6e4c-448a-ad97-4b194bdcbe50-whisker-backend-key-pair\") pod \"whisker-767b6d8985-vppnt\" (UID: \"a77cac63-6e4c-448a-ad97-4b194bdcbe50\") " pod="calico-system/whisker-767b6d8985-vppnt" May 17 00:30:50.158410 kubelet[2529]: E0517 00:30:50.158363 2529 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition May 17 00:30:50.159039 kubelet[2529]: E0517 00:30:50.158452 2529 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a77cac63-6e4c-448a-ad97-4b194bdcbe50-whisker-backend-key-pair podName:a77cac63-6e4c-448a-ad97-4b194bdcbe50 nodeName:}" failed. No retries permitted until 2025-05-17 00:30:50.658420683 +0000 UTC m=+28.058910484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/a77cac63-6e4c-448a-ad97-4b194bdcbe50-whisker-backend-key-pair") pod "whisker-767b6d8985-vppnt" (UID: "a77cac63-6e4c-448a-ad97-4b194bdcbe50") : failed to sync secret cache: timed out waiting for the condition May 17 00:30:50.694592 kubelet[2529]: I0517 00:30:50.694563 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05eee7e2-28d6-4731-8213-05c2e2b3f360" path="/var/lib/kubelet/pods/05eee7e2-28d6-4731-8213-05c2e2b3f360/volumes" May 17 00:30:50.721247 containerd[1476]: time="2025-05-17T00:30:50.720901182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767b6d8985-vppnt,Uid:a77cac63-6e4c-448a-ad97-4b194bdcbe50,Namespace:calico-system,Attempt:0,}" May 17 00:30:50.882106 systemd-networkd[1399]: calid73ea30d68d: Link UP May 17 00:30:50.882296 systemd-networkd[1399]: calid73ea30d68d: Gained carrier May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.775 [INFO][3929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.793 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0 whisker-767b6d8985- calico-system a77cac63-6e4c-448a-ad97-4b194bdcbe50 928 0 2025-05-17 00:30:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:767b6d8985 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-232-0-241 whisker-767b6d8985-vppnt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid73ea30d68d [] [] }} ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.793 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.835 [INFO][3953] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" HandleID="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Workload="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.835 [INFO][3953] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" HandleID="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Workload="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9940), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-0-241", "pod":"whisker-767b6d8985-vppnt", "timestamp":"2025-05-17 00:30:50.834304107 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 
00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.836 [INFO][3953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.836 [INFO][3953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.836 [INFO][3953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.844 [INFO][3953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.849 [INFO][3953] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.855 [INFO][3953] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.856 [INFO][3953] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.858 [INFO][3953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.858 [INFO][3953] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.859 [INFO][3953] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2 May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.863 [INFO][3953] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.867 [INFO][3953] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.129/26] block=192.168.114.128/26 handle="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.868 [INFO][3953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.129/26] handle="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" host="172-232-0-241" May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.868 [INFO][3953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:50.894169 containerd[1476]: 2025-05-17 00:30:50.868 [INFO][3953] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.129/26] IPv6=[] ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" HandleID="k8s-pod-network.a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Workload="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.894678 containerd[1476]: 2025-05-17 00:30:50.871 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0", GenerateName:"whisker-767b6d8985-", Namespace:"calico-system", SelfLink:"", UID:"a77cac63-6e4c-448a-ad97-4b194bdcbe50", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767b6d8985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"whisker-767b6d8985-vppnt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid73ea30d68d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:50.894678 containerd[1476]: 2025-05-17 00:30:50.871 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.129/32] ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.894678 containerd[1476]: 2025-05-17 00:30:50.871 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid73ea30d68d ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.894678 containerd[1476]: 2025-05-17 00:30:50.882 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.894678 containerd[1476]: 2025-05-17 00:30:50.882 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" 
WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0", GenerateName:"whisker-767b6d8985-", Namespace:"calico-system", SelfLink:"", UID:"a77cac63-6e4c-448a-ad97-4b194bdcbe50", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767b6d8985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2", Pod:"whisker-767b6d8985-vppnt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid73ea30d68d", MAC:"7e:62:b7:b3:2f:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:50.894678 containerd[1476]: 2025-05-17 00:30:50.891 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2" Namespace="calico-system" Pod="whisker-767b6d8985-vppnt" WorkloadEndpoint="172--232--0--241-k8s-whisker--767b6d8985--vppnt-eth0" May 17 00:30:50.907612 containerd[1476]: time="2025-05-17T00:30:50.907207361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:50.907612 containerd[1476]: time="2025-05-17T00:30:50.907246882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:50.907612 containerd[1476]: time="2025-05-17T00:30:50.907257702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:50.907612 containerd[1476]: time="2025-05-17T00:30:50.907316002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:50.930550 systemd[1]: Started cri-containerd-a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2.scope - libcontainer container a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2. 
May 17 00:30:50.958300 containerd[1476]: time="2025-05-17T00:30:50.958168971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767b6d8985-vppnt,Uid:a77cac63-6e4c-448a-ad97-4b194bdcbe50,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2\"" May 17 00:30:50.960663 containerd[1476]: time="2025-05-17T00:30:50.960493734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:30:51.093600 containerd[1476]: time="2025-05-17T00:30:51.093575029Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:51.094441 containerd[1476]: time="2025-05-17T00:30:51.094407904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:51.094700 containerd[1476]: time="2025-05-17T00:30:51.094468764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:30:51.094735 kubelet[2529]: E0517 00:30:51.094545 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:51.094735 kubelet[2529]: E0517 00:30:51.094578 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:30:51.095673 kubelet[2529]: E0517 00:30:51.095642 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be8615eacac5472da34b065a5f473380,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:51.097263 containerd[1476]: time="2025-05-17T00:30:51.097143658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:30:51.197272 containerd[1476]: time="2025-05-17T00:30:51.197200592Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:30:51.198419 containerd[1476]: time="2025-05-17T00:30:51.198384528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:30:51.198572 containerd[1476]: time="2025-05-17T00:30:51.198464158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:30:51.198777 kubelet[2529]: E0517 00:30:51.198717 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:51.199316 kubelet[2529]: E0517 00:30:51.198798 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:30:51.199376 kubelet[2529]: E0517 00:30:51.198946 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:30:51.200483 kubelet[2529]: E0517 00:30:51.200383 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:30:51.674953 systemd[1]: run-containerd-runc-k8s.io-a6b8118f72504c959c1bbdd713bb431481cb0778c4d971f1de8ec9a234e649b2-runc.H3C9GN.mount: Deactivated successfully. May 17 00:30:51.849545 kubelet[2529]: E0517 00:30:51.849494 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:30:51.938558 systemd-networkd[1399]: calid73ea30d68d: Gained IPv6LL May 17 00:30:52.853118 kubelet[2529]: E0517 00:30:52.853068 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:30:55.692505 containerd[1476]: time="2025-05-17T00:30:55.692310122Z" level=info msg="StopPodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\"" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.726 [INFO][4105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.726 [INFO][4105] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" iface="eth0" netns="/var/run/netns/cni-c73c3a02-c117-788e-9ee5-0802d0828fad" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.726 [INFO][4105] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" iface="eth0" netns="/var/run/netns/cni-c73c3a02-c117-788e-9ee5-0802d0828fad" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.726 [INFO][4105] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" iface="eth0" netns="/var/run/netns/cni-c73c3a02-c117-788e-9ee5-0802d0828fad" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.726 [INFO][4105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.726 [INFO][4105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.745 [INFO][4112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.746 [INFO][4112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.746 [INFO][4112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.752 [WARNING][4112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.752 [INFO][4112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.753 [INFO][4112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:55.757215 containerd[1476]: 2025-05-17 00:30:55.755 [INFO][4105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:30:55.759612 containerd[1476]: time="2025-05-17T00:30:55.759571509Z" level=info msg="TearDown network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" successfully" May 17 00:30:55.759612 containerd[1476]: time="2025-05-17T00:30:55.759599609Z" level=info msg="StopPodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" returns successfully" May 17 00:30:55.759878 kubelet[2529]: E0517 00:30:55.759850 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:55.760895 containerd[1476]: time="2025-05-17T00:30:55.760234032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q7n6z,Uid:eb74581d-78ad-4419-8238-b440c64be7cd,Namespace:kube-system,Attempt:1,}" May 17 00:30:55.761130 systemd[1]: run-netns-cni\x2dc73c3a02\x2dc117\x2d788e\x2d9ee5\x2d0802d0828fad.mount: Deactivated successfully. 
May 17 00:30:55.851190 systemd-networkd[1399]: calib0c6e6f1a25: Link UP May 17 00:30:55.854734 systemd-networkd[1399]: calib0c6e6f1a25: Gained carrier May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.789 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.796 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0 coredns-7c65d6cfc9- kube-system eb74581d-78ad-4419-8238-b440c64be7cd 972 0 2025-05-17 00:30:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-0-241 coredns-7c65d6cfc9-q7n6z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib0c6e6f1a25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.796 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.817 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" HandleID="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.817 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" HandleID="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-0-241", "pod":"coredns-7c65d6cfc9-q7n6z", "timestamp":"2025-05-17 00:30:55.817840689 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.817 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.818 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.818 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.822 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.826 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.831 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.833 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.834 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.834 [INFO][4130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.836 [INFO][4130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918 May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.838 [INFO][4130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.843 [INFO][4130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.130/26] block=192.168.114.128/26 handle="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.843 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.130/26] handle="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" host="172-232-0-241" May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.843 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
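The [4130] trace above is Calico IPAM's fast path: take the host-wide lock, look up the node's block affinities, confirm the affine block 192.168.114.128/26, and claim the next free address from it (192.168.114.130 here, with .131 and .132 going to the next two pods below). A toy sketch of that final step, claiming the lowest free address in an affine /26 under a lock; the two pre-used addresses stand in for earlier allocations on this node, and none of this is Calico's real implementation:

// blockalloc.go - toy version of claiming the next free IP from a node's
// affine /26 block under a host-wide lock, as in the ipam/ipam.go trace.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu   sync.Mutex              // stands in for the "host-wide IPAM lock"
	cidr netip.Prefix            // the node's affine block
	used map[netip.Addr]string   // addr -> handle, e.g. "k8s-pod-network.<containerID>"
}

func (b *block) assign(handle string) (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.114.128/26"),
		used: map[netip.Addr]string{ // placeholders for addresses already taken on this node
			netip.MustParseAddr("192.168.114.128"): "in-use",
			netip.MustParseAddr("192.168.114.129"): "in-use",
		},
	}
	if ip, ok := b.assign("k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918"); ok {
		fmt.Println(ip) // 192.168.114.130, as claimed for coredns-7c65d6cfc9-q7n6z
	}
}

Because the whole /26 is affine to 172-232-0-241, allocations stay node-local and only the block document has to be written back to the datastore.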
May 17 00:30:55.869528 containerd[1476]: 2025-05-17 00:30:55.843 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.130/26] IPv6=[] ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" HandleID="k8s-pod-network.3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.869973 containerd[1476]: 2025-05-17 00:30:55.847 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb74581d-78ad-4419-8238-b440c64be7cd", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"coredns-7c65d6cfc9-q7n6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0c6e6f1a25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:55.869973 containerd[1476]: 2025-05-17 00:30:55.847 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.130/32] ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.869973 containerd[1476]: 2025-05-17 00:30:55.847 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0c6e6f1a25 ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.869973 containerd[1476]: 2025-05-17 00:30:55.857 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" 
WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.869973 containerd[1476]: 2025-05-17 00:30:55.858 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb74581d-78ad-4419-8238-b440c64be7cd", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918", Pod:"coredns-7c65d6cfc9-q7n6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0c6e6f1a25", MAC:"26:55:5c:8f:f5:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:55.869973 containerd[1476]: 2025-05-17 00:30:55.865 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q7n6z" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:30:55.885053 containerd[1476]: time="2025-05-17T00:30:55.884781315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:55.885053 containerd[1476]: time="2025-05-17T00:30:55.884842935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:55.885053 containerd[1476]: time="2025-05-17T00:30:55.884854335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:55.885053 containerd[1476]: time="2025-05-17T00:30:55.884935546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:55.909522 systemd[1]: Started cri-containerd-3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918.scope - libcontainer container 3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918. May 17 00:30:55.949631 containerd[1476]: time="2025-05-17T00:30:55.949412511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q7n6z,Uid:eb74581d-78ad-4419-8238-b440c64be7cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918\"" May 17 00:30:55.950306 kubelet[2529]: E0517 00:30:55.950253 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:55.953829 containerd[1476]: time="2025-05-17T00:30:55.953639549Z" level=info msg="CreateContainer within sandbox \"3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:30:55.961649 containerd[1476]: time="2025-05-17T00:30:55.961630741Z" level=info msg="CreateContainer within sandbox \"3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b321fc52461d86119d9b6fc720fca4031bd21b7e6df85f2e5319bc2385c27990\"" May 17 00:30:55.962171 containerd[1476]: time="2025-05-17T00:30:55.962125294Z" level=info msg="StartContainer for \"b321fc52461d86119d9b6fc720fca4031bd21b7e6df85f2e5319bc2385c27990\"" May 17 00:30:55.982540 systemd[1]: Started cri-containerd-b321fc52461d86119d9b6fc720fca4031bd21b7e6df85f2e5319bc2385c27990.scope - libcontainer container b321fc52461d86119d9b6fc720fca4031bd21b7e6df85f2e5319bc2385c27990. May 17 00:30:56.003556 containerd[1476]: time="2025-05-17T00:30:56.003522023Z" level=info msg="StartContainer for \"b321fc52461d86119d9b6fc720fca4031bd21b7e6df85f2e5319bc2385c27990\" returns successfully" May 17 00:30:56.693809 containerd[1476]: time="2025-05-17T00:30:56.693265456Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" May 17 00:30:56.760776 systemd[1]: run-containerd-runc-k8s.io-3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918-runc.lDvPaE.mount: Deactivated successfully. May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.735 [INFO][4253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.736 [INFO][4253] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" iface="eth0" netns="/var/run/netns/cni-96256b84-41c6-7f64-06bb-30071e8c16bb" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.736 [INFO][4253] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" iface="eth0" netns="/var/run/netns/cni-96256b84-41c6-7f64-06bb-30071e8c16bb" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.737 [INFO][4253] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" iface="eth0" netns="/var/run/netns/cni-96256b84-41c6-7f64-06bb-30071e8c16bb" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.737 [INFO][4253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.737 [INFO][4253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.762 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.763 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.763 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.768 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.768 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.769 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:56.776354 containerd[1476]: 2025-05-17 00:30:56.772 [INFO][4253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:30:56.776227 systemd[1]: run-netns-cni\x2d96256b84\x2d41c6\x2d7f64\x2d06bb\x2d30071e8c16bb.mount: Deactivated successfully. 
May 17 00:30:56.776983 containerd[1476]: time="2025-05-17T00:30:56.776939460Z" level=info msg="TearDown network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" successfully" May 17 00:30:56.776983 containerd[1476]: time="2025-05-17T00:30:56.776966120Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" returns successfully" May 17 00:30:56.778224 containerd[1476]: time="2025-05-17T00:30:56.778192874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-chqjq,Uid:16a8edb6-95df-4bc2-a130-7cc52db94763,Namespace:calico-apiserver,Attempt:1,}" May 17 00:30:56.864273 kubelet[2529]: E0517 00:30:56.864107 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:56.893723 kubelet[2529]: I0517 00:30:56.892901 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q7n6z" podStartSLOduration=28.892883507 podStartE2EDuration="28.892883507s" podCreationTimestamp="2025-05-17 00:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:56.880465079 +0000 UTC m=+34.280954900" watchObservedRunningTime="2025-05-17 00:30:56.892883507 +0000 UTC m=+34.293373308" May 17 00:30:56.901387 systemd-networkd[1399]: calid28495967b6: Link UP May 17 00:30:56.902572 systemd-networkd[1399]: calid28495967b6: Gained carrier May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.809 [INFO][4268] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.818 [INFO][4268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0 calico-apiserver-7cf648ccbb- calico-apiserver 16a8edb6-95df-4bc2-a130-7cc52db94763 984 0 2025-05-17 00:30:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf648ccbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-0-241 calico-apiserver-7cf648ccbb-chqjq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid28495967b6 [] [] }} ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.818 [INFO][4268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.849 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 
17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.849 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-0-241", "pod":"calico-apiserver-7cf648ccbb-chqjq", "timestamp":"2025-05-17 00:30:56.84966443 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.850 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.850 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.850 [INFO][4280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.855 [INFO][4280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.862 [INFO][4280] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.869 [INFO][4280] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.871 [INFO][4280] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.874 [INFO][4280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.874 [INFO][4280] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.876 [INFO][4280] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.883 [INFO][4280] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.889 [INFO][4280] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.131/26] block=192.168.114.128/26 handle="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.889 [INFO][4280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.131/26] handle="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" host="172-232-0-241" May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.889 [INFO][4280] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:56.926322 containerd[1476]: 2025-05-17 00:30:56.889 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.131/26] IPv6=[] ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.926862 containerd[1476]: 2025-05-17 00:30:56.894 [INFO][4268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"16a8edb6-95df-4bc2-a130-7cc52db94763", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"calico-apiserver-7cf648ccbb-chqjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid28495967b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:56.926862 containerd[1476]: 2025-05-17 00:30:56.895 [INFO][4268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.131/32] ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.926862 containerd[1476]: 2025-05-17 00:30:56.895 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid28495967b6 ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.926862 containerd[1476]: 2025-05-17 00:30:56.903 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.926862 containerd[1476]: 2025-05-17 00:30:56.904 [INFO][4268] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"16a8edb6-95df-4bc2-a130-7cc52db94763", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d", Pod:"calico-apiserver-7cf648ccbb-chqjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid28495967b6", MAC:"4e:64:19:23:28:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:56.926862 containerd[1476]: 2025-05-17 00:30:56.919 [INFO][4268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-chqjq" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:30:56.957570 containerd[1476]: time="2025-05-17T00:30:56.956238872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:56.957570 containerd[1476]: time="2025-05-17T00:30:56.957023965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:56.957570 containerd[1476]: time="2025-05-17T00:30:56.957055625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:56.957570 containerd[1476]: time="2025-05-17T00:30:56.957203976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:56.988556 systemd[1]: Started cri-containerd-65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d.scope - libcontainer container 65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d. 
May 17 00:30:57.024176 containerd[1476]: time="2025-05-17T00:30:57.024118748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-chqjq,Uid:16a8edb6-95df-4bc2-a130-7cc52db94763,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\"" May 17 00:30:57.030040 containerd[1476]: time="2025-05-17T00:30:57.030005880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:30:57.313588 systemd-networkd[1399]: calib0c6e6f1a25: Gained IPv6LL May 17 00:30:57.693401 containerd[1476]: time="2025-05-17T00:30:57.693364380Z" level=info msg="StopPodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\"" May 17 00:30:57.693958 containerd[1476]: time="2025-05-17T00:30:57.693931512Z" level=info msg="StopPodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\"" May 17 00:30:57.696746 containerd[1476]: time="2025-05-17T00:30:57.696373921Z" level=info msg="StopPodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\"" May 17 00:30:57.763951 systemd[1]: run-containerd-runc-k8s.io-65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d-runc.9dyVFa.mount: Deactivated successfully. May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.742 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.742 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" iface="eth0" netns="/var/run/netns/cni-fe9a85de-0f83-dd55-5fc8-bc5f0882518e" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.743 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" iface="eth0" netns="/var/run/netns/cni-fe9a85de-0f83-dd55-5fc8-bc5f0882518e" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.743 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" iface="eth0" netns="/var/run/netns/cni-fe9a85de-0f83-dd55-5fc8-bc5f0882518e" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.743 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.743 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.783 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
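The PullImage just issued for ghcr.io/flatcar/calico/apiserver:v3.30.0 goes through the same anonymous-token handshake that already failed for the whisker images above: containerd first GETs a pull token from https://ghcr.io/token, and a 403 Forbidden there aborts the pull before any blob is fetched, which is what drove kubelet into ErrImagePull and then ImagePullBackOff. A small diagnostic sketch that replays the token request from any machine; the apiserver scope string is inferred from the whisker token URLs in the log, not taken from a recorded request:

// tokencheck.go - reproduce the anonymous token request containerd makes
// before pulling from ghcr.io; a 403 here matches the failures in the log.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// scope inferred by analogy with the whisker-backend URL seen above
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fapiserver%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	n := len(body)
	if n > 200 {
		n = 200
	}
	fmt.Println(resp.Status)        // the log saw "403 Forbidden"
	fmt.Println(string(body[:n]))   // registry error detail, if any
}

A persistent 403 on this endpoint cannot be fixed by kubelet's backoff; the repository has to become pullable (or the image mirrored) on the registry side.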
May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.794 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.794 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.795 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:57.799112 containerd[1476]: 2025-05-17 00:30:57.797 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:30:57.805492 containerd[1476]: time="2025-05-17T00:30:57.802470085Z" level=info msg="TearDown network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" successfully" May 17 00:30:57.805492 containerd[1476]: time="2025-05-17T00:30:57.802499285Z" level=info msg="StopPodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" returns successfully" May 17 00:30:57.806979 systemd[1]: run-netns-cni\x2dfe9a85de\x2d0f83\x2ddd55\x2d5fc8\x2dbc5f0882518e.mount: Deactivated successfully. May 17 00:30:57.807467 containerd[1476]: time="2025-05-17T00:30:57.807386273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-s52mw,Uid:ee80876b-aa39-4375-a4e1-fd4e85f8d3ee,Namespace:calico-system,Attempt:1,}" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.767 [INFO][4384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.768 [INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" iface="eth0" netns="/var/run/netns/cni-04dffc28-e113-f6fa-d30f-bc236a9ace0f" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.768 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" iface="eth0" netns="/var/run/netns/cni-04dffc28-e113-f6fa-d30f-bc236a9ace0f" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.768 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" iface="eth0" netns="/var/run/netns/cni-04dffc28-e113-f6fa-d30f-bc236a9ace0f" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.768 [INFO][4384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.768 [INFO][4384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.824 [INFO][4410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.825 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.825 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.834 [WARNING][4410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.834 [INFO][4410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.837 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:57.848684 containerd[1476]: 2025-05-17 00:30:57.842 [INFO][4384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:30:57.851545 containerd[1476]: time="2025-05-17T00:30:57.849587936Z" level=info msg="TearDown network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" successfully" May 17 00:30:57.851545 containerd[1476]: time="2025-05-17T00:30:57.849614106Z" level=info msg="StopPodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" returns successfully" May 17 00:30:57.852074 containerd[1476]: time="2025-05-17T00:30:57.852056715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6b49969-lmb87,Uid:dd5a12e6-0476-4ee4-9663-5e2d40e20810,Namespace:calico-apiserver,Attempt:1,}" May 17 00:30:57.853282 systemd[1]: run-netns-cni\x2d04dffc28\x2de113\x2df6fa\x2dd30f\x2dbc236a9ace0f.mount: Deactivated successfully. May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.783 [INFO][4392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4392] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" iface="eth0" netns="/var/run/netns/cni-468d9140-2d51-f1c5-5291-fdd7b79b3e76" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4392] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" iface="eth0" netns="/var/run/netns/cni-468d9140-2d51-f1c5-5291-fdd7b79b3e76" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4392] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" iface="eth0" netns="/var/run/netns/cni-468d9140-2d51-f1c5-5291-fdd7b79b3e76" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.785 [INFO][4392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.834 [INFO][4418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.834 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.838 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.844 [WARNING][4418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.844 [INFO][4418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.846 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:57.855801 containerd[1476]: 2025-05-17 00:30:57.852 [INFO][4392] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:30:57.856742 containerd[1476]: time="2025-05-17T00:30:57.856479211Z" level=info msg="TearDown network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" successfully" May 17 00:30:57.856742 containerd[1476]: time="2025-05-17T00:30:57.856531371Z" level=info msg="StopPodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" returns successfully" May 17 00:30:57.857292 containerd[1476]: time="2025-05-17T00:30:57.857183923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-wj8jt,Uid:7e0aafb5-c219-4523-9e5b-1fe312a4aa2d,Namespace:calico-apiserver,Attempt:1,}" May 17 00:30:57.868360 kubelet[2529]: E0517 00:30:57.868064 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:57.981172 systemd-networkd[1399]: cali4dbc812d449: Link UP May 17 00:30:57.983278 systemd-networkd[1399]: cali4dbc812d449: Gained carrier May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.891 [INFO][4438] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.901 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0 calico-apiserver-59c6b49969- calico-apiserver dd5a12e6-0476-4ee4-9663-5e2d40e20810 1003 0 2025-05-17 00:30:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c6b49969 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-0-241 calico-apiserver-59c6b49969-lmb87 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4dbc812d449 [] [] }} ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.901 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.934 [INFO][4468] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" HandleID="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.935 [INFO][4468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" HandleID="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235020), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-0-241", "pod":"calico-apiserver-59c6b49969-lmb87", "timestamp":"2025-05-17 00:30:57.934703494 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.935 [INFO][4468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.935 [INFO][4468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.935 [INFO][4468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.940 [INFO][4468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.948 [INFO][4468] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.958 [INFO][4468] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.962 [INFO][4468] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.963 [INFO][4468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.963 [INFO][4468] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.964 [INFO][4468] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6 May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.967 [INFO][4468] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.972 [INFO][4468] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.132/26] block=192.168.114.128/26 handle="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.972 [INFO][4468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.132/26] handle="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" host="172-232-0-241" May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.972 [INFO][4468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:57.997701 containerd[1476]: 2025-05-17 00:30:57.972 [INFO][4468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.132/26] IPv6=[] ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" HandleID="k8s-pod-network.dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.998225 containerd[1476]: 2025-05-17 00:30:57.974 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0", GenerateName:"calico-apiserver-59c6b49969-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd5a12e6-0476-4ee4-9663-5e2d40e20810", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6b49969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"calico-apiserver-59c6b49969-lmb87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dbc812d449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:57.998225 containerd[1476]: 2025-05-17 00:30:57.974 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.132/32] ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.998225 containerd[1476]: 2025-05-17 00:30:57.975 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dbc812d449 ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.998225 containerd[1476]: 2025-05-17 00:30:57.977 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:57.998225 containerd[1476]: 2025-05-17 00:30:57.977 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0", GenerateName:"calico-apiserver-59c6b49969-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd5a12e6-0476-4ee4-9663-5e2d40e20810", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6b49969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6", Pod:"calico-apiserver-59c6b49969-lmb87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dbc812d449", MAC:"1e:0e:95:8e:c4:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:57.998225 containerd[1476]: 2025-05-17 00:30:57.994 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-lmb87" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:30:58.019038 containerd[1476]: time="2025-05-17T00:30:58.018686654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:58.019038 containerd[1476]: time="2025-05-17T00:30:58.018724514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:58.019038 containerd[1476]: time="2025-05-17T00:30:58.018735954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.019038 containerd[1476]: time="2025-05-17T00:30:58.018824484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.036628 systemd[1]: Started cri-containerd-dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6.scope - libcontainer container dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6. 
May 17 00:30:58.079510 containerd[1476]: time="2025-05-17T00:30:58.079165159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6b49969-lmb87,Uid:dd5a12e6-0476-4ee4-9663-5e2d40e20810,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6\"" May 17 00:30:58.085098 systemd-networkd[1399]: cali5e2011c45fb: Link UP May 17 00:30:58.087070 systemd-networkd[1399]: cali5e2011c45fb: Gained carrier May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.859 [INFO][4426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.879 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0 goldmane-8f77d7b6c- calico-system ee80876b-aa39-4375-a4e1-fd4e85f8d3ee 1002 0 2025-05-17 00:30:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-232-0-241 goldmane-8f77d7b6c-s52mw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5e2011c45fb [] [] }} ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.879 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.945 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" HandleID="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.947 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" HandleID="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9870), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-0-241", "pod":"goldmane-8f77d7b6c-s52mw", "timestamp":"2025-05-17 00:30:57.945980565 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.947 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.973 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:57.973 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.042 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.049 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.056 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.058 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.061 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.061 [INFO][4460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.062 [INFO][4460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235 May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.066 [INFO][4460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.072 [INFO][4460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.133/26] block=192.168.114.128/26 handle="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.072 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.133/26] handle="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" host="172-232-0-241" May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.072 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:58.101856 containerd[1476]: 2025-05-17 00:30:58.072 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.133/26] IPv6=[] ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" HandleID="k8s-pod-network.3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.102317 containerd[1476]: 2025-05-17 00:30:58.080 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"goldmane-8f77d7b6c-s52mw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2011c45fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:58.102317 containerd[1476]: 2025-05-17 00:30:58.081 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.133/32] ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.102317 containerd[1476]: 2025-05-17 00:30:58.081 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e2011c45fb ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.102317 containerd[1476]: 2025-05-17 00:30:58.086 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.102317 containerd[1476]: 2025-05-17 00:30:58.086 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" 
WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235", Pod:"goldmane-8f77d7b6c-s52mw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2011c45fb", MAC:"9a:68:53:bf:91:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:58.102317 containerd[1476]: 2025-05-17 00:30:58.099 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235" Namespace="calico-system" Pod="goldmane-8f77d7b6c-s52mw" WorkloadEndpoint="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:30:58.119715 containerd[1476]: time="2025-05-17T00:30:58.119628366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:58.119868 containerd[1476]: time="2025-05-17T00:30:58.119791377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:58.119963 containerd[1476]: time="2025-05-17T00:30:58.119939447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.120374 containerd[1476]: time="2025-05-17T00:30:58.120230898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.139576 systemd[1]: Started cri-containerd-3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235.scope - libcontainer container 3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235. 
May 17 00:30:58.192150 systemd-networkd[1399]: cali3cd4f335fc4: Link UP May 17 00:30:58.194247 systemd-networkd[1399]: cali3cd4f335fc4: Gained carrier May 17 00:30:58.204179 containerd[1476]: time="2025-05-17T00:30:58.203877692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-s52mw,Uid:ee80876b-aa39-4375-a4e1-fd4e85f8d3ee,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235\"" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:57.923 [INFO][4445] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:57.939 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0 calico-apiserver-7cf648ccbb- calico-apiserver 7e0aafb5-c219-4523-9e5b-1fe312a4aa2d 1004 0 2025-05-17 00:30:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf648ccbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-0-241 calico-apiserver-7cf648ccbb-wj8jt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3cd4f335fc4 [] [] }} ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:57.939 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:57.986 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:57.987 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-0-241", "pod":"calico-apiserver-7cf648ccbb-wj8jt", "timestamp":"2025-05-17 00:30:57.976732766 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:57.987 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.072 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.072 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.142 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.149 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.157 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.160 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.164 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.164 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.166 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323 May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.172 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.181 [INFO][4479] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.134/26] block=192.168.114.128/26 handle="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.181 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.134/26] handle="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" host="172-232-0-241" May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.181 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:58.219538 containerd[1476]: 2025-05-17 00:30:58.181 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.134/26] IPv6=[] ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.220303 containerd[1476]: 2025-05-17 00:30:58.186 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"calico-apiserver-7cf648ccbb-wj8jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cd4f335fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:58.220303 containerd[1476]: 2025-05-17 00:30:58.186 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.134/32] ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.220303 containerd[1476]: 2025-05-17 00:30:58.186 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cd4f335fc4 ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.220303 containerd[1476]: 2025-05-17 00:30:58.194 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.220303 containerd[1476]: 2025-05-17 00:30:58.200 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323", Pod:"calico-apiserver-7cf648ccbb-wj8jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cd4f335fc4", MAC:"86:25:73:79:77:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:58.220303 containerd[1476]: 2025-05-17 00:30:58.216 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Namespace="calico-apiserver" Pod="calico-apiserver-7cf648ccbb-wj8jt" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:30:58.250334 containerd[1476]: time="2025-05-17T00:30:58.249864798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:58.250334 containerd[1476]: time="2025-05-17T00:30:58.249914328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:58.250334 containerd[1476]: time="2025-05-17T00:30:58.249954808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.250334 containerd[1476]: time="2025-05-17T00:30:58.250053339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.283566 systemd[1]: Started cri-containerd-63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323.scope - libcontainer container 63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323. 
May 17 00:30:58.333562 containerd[1476]: time="2025-05-17T00:30:58.333457942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf648ccbb-wj8jt,Uid:7e0aafb5-c219-4523-9e5b-1fe312a4aa2d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\"" May 17 00:30:58.657586 systemd-networkd[1399]: calid28495967b6: Gained IPv6LL May 17 00:30:58.695239 containerd[1476]: time="2025-05-17T00:30:58.693746575Z" level=info msg="StopPodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\"" May 17 00:30:58.695695 containerd[1476]: time="2025-05-17T00:30:58.695658611Z" level=info msg="StopPodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\"" May 17 00:30:58.770331 systemd[1]: run-netns-cni\x2d468d9140\x2d2d51\x2df1c5\x2d5291\x2dfdd7b79b3e76.mount: Deactivated successfully. May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.738 [INFO][4667] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.738 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" iface="eth0" netns="/var/run/netns/cni-daac48ae-d065-e75c-249d-3a5e2846f0f6" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.739 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" iface="eth0" netns="/var/run/netns/cni-daac48ae-d065-e75c-249d-3a5e2846f0f6" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.739 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" iface="eth0" netns="/var/run/netns/cni-daac48ae-d065-e75c-249d-3a5e2846f0f6" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.739 [INFO][4667] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.740 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.774 [INFO][4681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.774 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.774 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.781 [WARNING][4681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.781 [INFO][4681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.782 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:58.788477 containerd[1476]: 2025-05-17 00:30:58.785 [INFO][4667] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:30:58.793172 systemd[1]: run-netns-cni\x2ddaac48ae\x2dd065\x2de75c\x2d249d\x2d3a5e2846f0f6.mount: Deactivated successfully. May 17 00:30:58.793819 containerd[1476]: time="2025-05-17T00:30:58.793683714Z" level=info msg="TearDown network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" successfully" May 17 00:30:58.793819 containerd[1476]: time="2025-05-17T00:30:58.793719324Z" level=info msg="StopPodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" returns successfully" May 17 00:30:58.794384 kubelet[2529]: E0517 00:30:58.794092 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:58.796023 containerd[1476]: time="2025-05-17T00:30:58.795982692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wvn6c,Uid:02488dc1-7388-4c3e-bda7-2622333fb0c8,Namespace:kube-system,Attempt:1,}" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.746 [INFO][4666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.746 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" iface="eth0" netns="/var/run/netns/cni-3c34ff7b-b5aa-4afc-a7cc-aa92b2495502" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.746 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" iface="eth0" netns="/var/run/netns/cni-3c34ff7b-b5aa-4afc-a7cc-aa92b2495502" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.747 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" iface="eth0" netns="/var/run/netns/cni-3c34ff7b-b5aa-4afc-a7cc-aa92b2495502" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.747 [INFO][4666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.747 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.777 [INFO][4686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.777 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.782 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.787 [WARNING][4686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.787 [INFO][4686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.795 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:30:58.801830 containerd[1476]: 2025-05-17 00:30:58.799 [INFO][4666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:30:58.808710 containerd[1476]: time="2025-05-17T00:30:58.802023352Z" level=info msg="TearDown network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" successfully" May 17 00:30:58.808710 containerd[1476]: time="2025-05-17T00:30:58.802040782Z" level=info msg="StopPodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" returns successfully" May 17 00:30:58.808710 containerd[1476]: time="2025-05-17T00:30:58.802517914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dc47b75-xvwdn,Uid:f71e5f0b-7c52-4c28-8833-5eea34a70a67,Namespace:calico-system,Attempt:1,}" May 17 00:30:58.804557 systemd[1]: run-netns-cni\x2d3c34ff7b\x2db5aa\x2d4afc\x2da7cc\x2daa92b2495502.mount: Deactivated successfully. 
May 17 00:30:58.874546 kubelet[2529]: E0517 00:30:58.874513 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:58.930822 systemd-networkd[1399]: cali55893d4ae14: Link UP May 17 00:30:58.933299 systemd-networkd[1399]: cali55893d4ae14: Gained carrier May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.854 [INFO][4704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.862 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0 calico-kube-controllers-96dc47b75- calico-system f71e5f0b-7c52-4c28-8833-5eea34a70a67 1023 0 2025-05-17 00:30:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:96dc47b75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-0-241 calico-kube-controllers-96dc47b75-xvwdn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali55893d4ae14 [] [] }} ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.862 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.888 [INFO][4723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" HandleID="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.888 [INFO][4723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" HandleID="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3050), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-0-241", "pod":"calico-kube-controllers-96dc47b75-xvwdn", "timestamp":"2025-05-17 00:30:58.888133234 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.888 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.888 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.888 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.894 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.898 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.903 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.906 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.909 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.909 [INFO][4723] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.911 [INFO][4723] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738 May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.915 [INFO][4723] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.921 [INFO][4723] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.135/26] block=192.168.114.128/26 handle="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.921 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.135/26] handle="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" host="172-232-0-241" May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.921 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:58.954214 containerd[1476]: 2025-05-17 00:30:58.921 [INFO][4723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.135/26] IPv6=[] ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" HandleID="k8s-pod-network.2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.954625 containerd[1476]: 2025-05-17 00:30:58.924 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0", GenerateName:"calico-kube-controllers-96dc47b75-", Namespace:"calico-system", SelfLink:"", UID:"f71e5f0b-7c52-4c28-8833-5eea34a70a67", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96dc47b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"calico-kube-controllers-96dc47b75-xvwdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55893d4ae14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:58.954625 containerd[1476]: 2025-05-17 00:30:58.924 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.135/32] ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.954625 containerd[1476]: 2025-05-17 00:30:58.924 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55893d4ae14 ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.954625 containerd[1476]: 2025-05-17 00:30:58.934 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.954625 containerd[1476]: 2025-05-17 00:30:58.934 [INFO][4704] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0", GenerateName:"calico-kube-controllers-96dc47b75-", Namespace:"calico-system", SelfLink:"", UID:"f71e5f0b-7c52-4c28-8833-5eea34a70a67", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96dc47b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738", Pod:"calico-kube-controllers-96dc47b75-xvwdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55893d4ae14", MAC:"42:ec:a1:2a:02:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:58.954625 containerd[1476]: 2025-05-17 00:30:58.951 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738" Namespace="calico-system" Pod="calico-kube-controllers-96dc47b75-xvwdn" WorkloadEndpoint="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:30:58.985334 containerd[1476]: time="2025-05-17T00:30:58.985235964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:58.986081 containerd[1476]: time="2025-05-17T00:30:58.985858496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:58.986081 containerd[1476]: time="2025-05-17T00:30:58.985899276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:58.986081 containerd[1476]: time="2025-05-17T00:30:58.986021317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:59.018685 systemd[1]: Started cri-containerd-2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738.scope - libcontainer container 2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738. 
May 17 00:30:59.041551 systemd-networkd[1399]: cali4dbc812d449: Gained IPv6LL May 17 00:30:59.060604 systemd-networkd[1399]: cali402c77c3fb8: Link UP May 17 00:30:59.060817 systemd-networkd[1399]: cali402c77c3fb8: Gained carrier May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.848 [INFO][4694] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.860 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0 coredns-7c65d6cfc9- kube-system 02488dc1-7388-4c3e-bda7-2622333fb0c8 1022 0 2025-05-17 00:30:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-0-241 coredns-7c65d6cfc9-wvn6c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali402c77c3fb8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.860 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.908 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" HandleID="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.909 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" HandleID="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235020), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-0-241", "pod":"coredns-7c65d6cfc9-wvn6c", "timestamp":"2025-05-17 00:30:58.908888895 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.909 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.921 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.921 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:58.997 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.008 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.014 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.019 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.021 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.021 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.022 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.027 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.044 [INFO][4718] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.136/26] block=192.168.114.128/26 handle="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.044 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.136/26] handle="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" host="172-232-0-241" May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.044 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:30:59.076879 containerd[1476]: 2025-05-17 00:30:59.044 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.136/26] IPv6=[] ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" HandleID="k8s-pod-network.89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.077307 containerd[1476]: 2025-05-17 00:30:59.051 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02488dc1-7388-4c3e-bda7-2622333fb0c8", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"coredns-7c65d6cfc9-wvn6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali402c77c3fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:59.077307 containerd[1476]: 2025-05-17 00:30:59.051 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.136/32] ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.077307 containerd[1476]: 2025-05-17 00:30:59.051 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali402c77c3fb8 ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.077307 containerd[1476]: 2025-05-17 00:30:59.060 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" 
WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.077307 containerd[1476]: 2025-05-17 00:30:59.061 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02488dc1-7388-4c3e-bda7-2622333fb0c8", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf", Pod:"coredns-7c65d6cfc9-wvn6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali402c77c3fb8", MAC:"86:ae:7f:a6:b3:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:30:59.077307 containerd[1476]: 2025-05-17 00:30:59.072 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wvn6c" WorkloadEndpoint="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:30:59.100472 containerd[1476]: time="2025-05-17T00:30:59.099308720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:30:59.100472 containerd[1476]: time="2025-05-17T00:30:59.100051652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:30:59.100472 containerd[1476]: time="2025-05-17T00:30:59.100063122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:59.100472 containerd[1476]: time="2025-05-17T00:30:59.100120613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:30:59.126645 systemd[1]: Started cri-containerd-89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf.scope - libcontainer container 89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf. May 17 00:30:59.128035 containerd[1476]: time="2025-05-17T00:30:59.127980281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96dc47b75-xvwdn,Uid:f71e5f0b-7c52-4c28-8833-5eea34a70a67,Namespace:calico-system,Attempt:1,} returns sandbox id \"2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738\"" May 17 00:30:59.166309 containerd[1476]: time="2025-05-17T00:30:59.166273273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wvn6c,Uid:02488dc1-7388-4c3e-bda7-2622333fb0c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf\"" May 17 00:30:59.166865 kubelet[2529]: E0517 00:30:59.166844 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:59.169341 containerd[1476]: time="2025-05-17T00:30:59.169305903Z" level=info msg="CreateContainer within sandbox \"89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:30:59.179171 containerd[1476]: time="2025-05-17T00:30:59.179133484Z" level=info msg="CreateContainer within sandbox \"89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a52879686b1759afba56ffe6677758d5acef02959b99b2f9e005ef08414937b\"" May 17 00:30:59.179613 containerd[1476]: time="2025-05-17T00:30:59.179517935Z" level=info msg="StartContainer for \"7a52879686b1759afba56ffe6677758d5acef02959b99b2f9e005ef08414937b\"" May 17 00:30:59.204668 systemd[1]: Started cri-containerd-7a52879686b1759afba56ffe6677758d5acef02959b99b2f9e005ef08414937b.scope - libcontainer container 7a52879686b1759afba56ffe6677758d5acef02959b99b2f9e005ef08414937b. 
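The kubelet warning "Nameserver limits exceeded" that repeats through this log is unrelated to Calico: the glibc resolver only honours the first three nameserver entries in resolv.conf (MAXNS is 3), so kubelet truncates the list it propagates into pod sandboxes and logs which servers were kept. A stdlib-only sketch of the same check:

    // Sketch: reproduce kubelet's nameserver-count check against /etc/resolv.conf.
    // Kubelet caps the applied list at 3 entries (the glibc MAXNS limit) and warns,
    // exactly as in the dns.go:153 lines above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        const maxNS = 3
        if len(servers) > maxNS {
            fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
                servers[:maxNS], servers[maxNS:])
        }
    }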
May 17 00:30:59.232319 containerd[1476]: time="2025-05-17T00:30:59.232289143Z" level=info msg="StartContainer for \"7a52879686b1759afba56ffe6677758d5acef02959b99b2f9e005ef08414937b\" returns successfully" May 17 00:30:59.617573 systemd-networkd[1399]: cali3cd4f335fc4: Gained IPv6LL May 17 00:30:59.681546 systemd-networkd[1399]: cali5e2011c45fb: Gained IPv6LL May 17 00:30:59.882837 kubelet[2529]: E0517 00:30:59.882515 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:30:59.908032 kubelet[2529]: I0517 00:30:59.907916 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wvn6c" podStartSLOduration=31.907902403 podStartE2EDuration="31.907902403s" podCreationTimestamp="2025-05-17 00:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:30:59.89760773 +0000 UTC m=+37.298097531" watchObservedRunningTime="2025-05-17 00:30:59.907902403 +0000 UTC m=+37.308392204" May 17 00:30:59.977909 containerd[1476]: time="2025-05-17T00:30:59.977874705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:59.978595 containerd[1476]: time="2025-05-17T00:30:59.978473917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:30:59.979157 containerd[1476]: time="2025-05-17T00:30:59.978936979Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:59.980920 containerd[1476]: time="2025-05-17T00:30:59.980895605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:30:59.981243 containerd[1476]: time="2025-05-17T00:30:59.981211006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.951168136s" May 17 00:30:59.981243 containerd[1476]: time="2025-05-17T00:30:59.981239446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:30:59.982869 containerd[1476]: time="2025-05-17T00:30:59.982763381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:30:59.983621 containerd[1476]: time="2025-05-17T00:30:59.983593424Z" level=info msg="CreateContainer within sandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:31:00.005506 containerd[1476]: time="2025-05-17T00:31:00.005482032Z" level=info msg="CreateContainer within sandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\"" May 17 00:31:00.005851 containerd[1476]: time="2025-05-17T00:31:00.005813963Z" level=info msg="StartContainer for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\"" May 17 00:31:00.041548 systemd[1]: Started cri-containerd-2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145.scope - libcontainer container 2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145. May 17 00:31:00.075406 containerd[1476]: time="2025-05-17T00:31:00.075377001Z" level=info msg="StartContainer for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" returns successfully" May 17 00:31:00.154029 containerd[1476]: time="2025-05-17T00:31:00.153953395Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:00.155074 containerd[1476]: time="2025-05-17T00:31:00.155038668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:31:00.156119 containerd[1476]: time="2025-05-17T00:31:00.156092732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 173.07402ms" May 17 00:31:00.156119 containerd[1476]: time="2025-05-17T00:31:00.156117142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:31:00.158037 containerd[1476]: time="2025-05-17T00:31:00.158016987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:31:00.158835 containerd[1476]: time="2025-05-17T00:31:00.158810090Z" level=info msg="CreateContainer within sandbox \"dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:31:00.174682 containerd[1476]: time="2025-05-17T00:31:00.174580647Z" level=info msg="CreateContainer within sandbox \"dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ae4bfc77e9e5298e1dc331ebce91a34d573ae749ba5673ab9873ce3483f8002\"" May 17 00:31:00.175074 containerd[1476]: time="2025-05-17T00:31:00.175049818Z" level=info msg="StartContainer for \"5ae4bfc77e9e5298e1dc331ebce91a34d573ae749ba5673ab9873ce3483f8002\"" May 17 00:31:00.207528 systemd[1]: Started cri-containerd-5ae4bfc77e9e5298e1dc331ebce91a34d573ae749ba5673ab9873ce3483f8002.scope - libcontainer container 5ae4bfc77e9e5298e1dc331ebce91a34d573ae749ba5673ab9873ce3483f8002. 
May 17 00:31:00.258852 containerd[1476]: time="2025-05-17T00:31:00.256287570Z" level=info msg="StartContainer for \"5ae4bfc77e9e5298e1dc331ebce91a34d573ae749ba5673ab9873ce3483f8002\" returns successfully" May 17 00:31:00.264245 containerd[1476]: time="2025-05-17T00:31:00.264220724Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:00.264846 containerd[1476]: time="2025-05-17T00:31:00.264809496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:00.264926 containerd[1476]: time="2025-05-17T00:31:00.264874826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:31:00.265061 kubelet[2529]: E0517 00:31:00.265010 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:00.265137 kubelet[2529]: E0517 00:31:00.265065 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:00.265408 containerd[1476]: time="2025-05-17T00:31:00.265354228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:31:00.267053 kubelet[2529]: E0517 00:31:00.267008 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnr67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-s52mw_calico-system(ee80876b-aa39-4375-a4e1-fd4e85f8d3ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:00.268683 kubelet[2529]: E0517 00:31:00.268646 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:31:00.422755 containerd[1476]: time="2025-05-17T00:31:00.422272056Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:00.423741 containerd[1476]: time="2025-05-17T00:31:00.423709980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:31:00.425043 containerd[1476]: time="2025-05-17T00:31:00.424826053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 159.417125ms" May 17 00:31:00.425043 containerd[1476]: time="2025-05-17T00:31:00.424851013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:31:00.426604 containerd[1476]: time="2025-05-17T00:31:00.426549268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:31:00.429613 containerd[1476]: time="2025-05-17T00:31:00.429570307Z" level=info msg="CreateContainer within sandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:31:00.455408 containerd[1476]: time="2025-05-17T00:31:00.455385564Z" level=info msg="CreateContainer within sandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\"" May 17 00:31:00.456897 containerd[1476]: time="2025-05-17T00:31:00.456777929Z" level=info msg="StartContainer for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\"" May 17 00:31:00.489675 systemd[1]: Started cri-containerd-132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362.scope - libcontainer container 132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362. May 17 00:31:00.545973 containerd[1476]: time="2025-05-17T00:31:00.545939994Z" level=info msg="StartContainer for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" returns successfully" May 17 00:31:00.577634 systemd-networkd[1399]: cali402c77c3fb8: Gained IPv6LL May 17 00:31:00.694813 containerd[1476]: time="2025-05-17T00:31:00.694703358Z" level=info msg="StopPodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\"" May 17 00:31:00.706773 systemd-networkd[1399]: cali55893d4ae14: Gained IPv6LL May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.763 [INFO][5045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.763 [INFO][5045] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" iface="eth0" netns="/var/run/netns/cni-4a5730c9-360e-698d-1889-7ae7cc500bb5" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.763 [INFO][5045] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" iface="eth0" netns="/var/run/netns/cni-4a5730c9-360e-698d-1889-7ae7cc500bb5" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.763 [INFO][5045] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" iface="eth0" netns="/var/run/netns/cni-4a5730c9-360e-698d-1889-7ae7cc500bb5" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.763 [INFO][5045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.763 [INFO][5045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.814 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.815 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.815 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.820 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.821 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.821 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:00.826744 containerd[1476]: 2025-05-17 00:31:00.824 [INFO][5045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:00.827677 containerd[1476]: time="2025-05-17T00:31:00.827542274Z" level=info msg="TearDown network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" successfully" May 17 00:31:00.827677 containerd[1476]: time="2025-05-17T00:31:00.827583074Z" level=info msg="StopPodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" returns successfully" May 17 00:31:00.832103 systemd[1]: run-netns-cni\x2d4a5730c9\x2d360e\x2d698d\x2d1889\x2d7ae7cc500bb5.mount: Deactivated successfully. 
May 17 00:31:00.840920 containerd[1476]: time="2025-05-17T00:31:00.840579823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9kj7,Uid:0996e84d-dd0b-49e3-addd-0931e48a258e,Namespace:calico-system,Attempt:1,}" May 17 00:31:00.901313 kubelet[2529]: E0517 00:31:00.901260 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:31:00.903682 kubelet[2529]: E0517 00:31:00.903666 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:31:00.924954 kubelet[2529]: I0517 00:31:00.924874 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59c6b49969-lmb87" podStartSLOduration=21.853249081 podStartE2EDuration="23.924865065s" podCreationTimestamp="2025-05-17 00:30:37 +0000 UTC" firstStartedPulling="2025-05-17 00:30:58.085069369 +0000 UTC m=+35.485559180" lastFinishedPulling="2025-05-17 00:31:00.156685373 +0000 UTC m=+37.557175164" observedRunningTime="2025-05-17 00:31:00.90974419 +0000 UTC m=+38.310233981" watchObservedRunningTime="2025-05-17 00:31:00.924865065 +0000 UTC m=+38.325354866" May 17 00:31:00.949278 kubelet[2529]: I0517 00:31:00.948645 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf648ccbb-wj8jt" podStartSLOduration=22.858145858 podStartE2EDuration="24.948628086s" podCreationTimestamp="2025-05-17 00:30:36 +0000 UTC" firstStartedPulling="2025-05-17 00:30:58.335362858 +0000 UTC m=+35.735852659" lastFinishedPulling="2025-05-17 00:31:00.425845086 +0000 UTC m=+37.826334887" observedRunningTime="2025-05-17 00:31:00.926300309 +0000 UTC m=+38.326790110" watchObservedRunningTime="2025-05-17 00:31:00.948628086 +0000 UTC m=+38.349117887" May 17 00:31:00.986870 systemd-networkd[1399]: calic47f256aa13: Link UP May 17 00:31:00.987094 systemd-networkd[1399]: calic47f256aa13: Gained carrier May 17 00:31:00.995197 kubelet[2529]: I0517 00:31:00.994997 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cf648ccbb-chqjq" podStartSLOduration=22.038524849 podStartE2EDuration="24.994979354s" podCreationTimestamp="2025-05-17 00:30:36 +0000 UTC" firstStartedPulling="2025-05-17 00:30:57.025685604 +0000 UTC m=+34.426175405" lastFinishedPulling="2025-05-17 00:30:59.982140109 +0000 UTC m=+37.382629910" observedRunningTime="2025-05-17 00:31:00.966594179 +0000 UTC m=+38.367083970" watchObservedRunningTime="2025-05-17 00:31:00.994979354 +0000 UTC m=+38.395469155" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.872 [INFO][5063] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.880 [INFO][5063] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-csi--node--driver--h9kj7-eth0 csi-node-driver- calico-system 0996e84d-dd0b-49e3-addd-0931e48a258e 1061 0 2025-05-17 00:30:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-0-241 csi-node-driver-h9kj7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic47f256aa13 [] [] }} ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.880 [INFO][5063] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.930 [INFO][5071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" HandleID="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.930 [INFO][5071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" HandleID="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f920), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-0-241", "pod":"csi-node-driver-h9kj7", "timestamp":"2025-05-17 00:31:00.930564762 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.930 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.930 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.930 [INFO][5071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.942 [INFO][5071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.946 [INFO][5071] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.956 [INFO][5071] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.958 [INFO][5071] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.961 [INFO][5071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.962 [INFO][5071] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.966 [INFO][5071] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993 May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.972 [INFO][5071] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.978 [INFO][5071] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.137/26] block=192.168.114.128/26 handle="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.978 [INFO][5071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.137/26] handle="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" host="172-232-0-241" May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.978 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
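Both CNI ADDs land in the same affine block because a Calico IPAM block is a /26, i.e. 64 addresses: 192.168.114.128/26 spans .128 through .191, and this node has now handed out .136 (coredns) and .137 (csi-node-driver) from it. The arithmetic, stdlib only:

    // Sketch: the address span of the /26 IPAM block claimed by this node.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.114.128/26")
        size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses
        first := block.Addr()            // 192.168.114.128
        last := first
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Printf("block %s: %d addrs, %s..%s\n", block, size, first, last) // .128...191
        fmt.Println(block.Contains(netip.MustParseAddr("192.168.114.137")))  // true
    }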
May 17 00:31:01.003484 containerd[1476]: 2025-05-17 00:31:00.978 [INFO][5071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.137/26] IPv6=[] ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" HandleID="k8s-pod-network.3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.004065 containerd[1476]: 2025-05-17 00:31:00.983 [INFO][5063] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-csi--node--driver--h9kj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0996e84d-dd0b-49e3-addd-0931e48a258e", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"csi-node-driver-h9kj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic47f256aa13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:01.004065 containerd[1476]: 2025-05-17 00:31:00.983 [INFO][5063] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.137/32] ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.004065 containerd[1476]: 2025-05-17 00:31:00.983 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic47f256aa13 ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.004065 containerd[1476]: 2025-05-17 00:31:00.987 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.004065 containerd[1476]: 2025-05-17 00:31:00.987 [INFO][5063] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" 
Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-csi--node--driver--h9kj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0996e84d-dd0b-49e3-addd-0931e48a258e", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993", Pod:"csi-node-driver-h9kj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic47f256aa13", MAC:"f6:77:d7:38:e5:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:01.004065 containerd[1476]: 2025-05-17 00:31:00.995 [INFO][5063] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993" Namespace="calico-system" Pod="csi-node-driver-h9kj7" WorkloadEndpoint="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:01.022998 containerd[1476]: time="2025-05-17T00:31:01.022732832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:31:01.022998 containerd[1476]: time="2025-05-17T00:31:01.022875023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:31:01.023396 containerd[1476]: time="2025-05-17T00:31:01.023333454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:31:01.023671 containerd[1476]: time="2025-05-17T00:31:01.023504694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:31:01.046602 systemd[1]: Started cri-containerd-3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993.scope - libcontainer container 3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993. 
May 17 00:31:01.097335 containerd[1476]: time="2025-05-17T00:31:01.097296161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9kj7,Uid:0996e84d-dd0b-49e3-addd-0931e48a258e,Namespace:calico-system,Attempt:1,} returns sandbox id \"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993\"" May 17 00:31:01.908278 kubelet[2529]: E0517 00:31:01.908232 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:31:01.910769 kubelet[2529]: I0517 00:31:01.910298 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:01.910769 kubelet[2529]: I0517 00:31:01.910607 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:01.910992 kubelet[2529]: I0517 00:31:01.910940 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:02.000952 containerd[1476]: time="2025-05-17T00:31:02.000860657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:02.002303 containerd[1476]: time="2025-05-17T00:31:02.002050720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:31:02.002802 containerd[1476]: time="2025-05-17T00:31:02.002771772Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:02.004991 containerd[1476]: time="2025-05-17T00:31:02.004940628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:02.005868 containerd[1476]: time="2025-05-17T00:31:02.005837400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 1.579265592s" May 17 00:31:02.005908 containerd[1476]: time="2025-05-17T00:31:02.005885800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:31:02.011342 containerd[1476]: time="2025-05-17T00:31:02.007515065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:31:02.022694 containerd[1476]: time="2025-05-17T00:31:02.022675914Z" level=info msg="CreateContainer within sandbox \"2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:31:02.030460 containerd[1476]: time="2025-05-17T00:31:02.030093814Z" level=info msg="CreateContainer within sandbox \"2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2ca5c69bc4059e9420a5f5ff12df96d0fa39a50d7a10a2b320a0ed8e6bb8d7d6\"" May 17 00:31:02.032119 containerd[1476]: time="2025-05-17T00:31:02.031814798Z" 
level=info msg="StartContainer for \"2ca5c69bc4059e9420a5f5ff12df96d0fa39a50d7a10a2b320a0ed8e6bb8d7d6\"" May 17 00:31:02.066528 systemd[1]: Started cri-containerd-2ca5c69bc4059e9420a5f5ff12df96d0fa39a50d7a10a2b320a0ed8e6bb8d7d6.scope - libcontainer container 2ca5c69bc4059e9420a5f5ff12df96d0fa39a50d7a10a2b320a0ed8e6bb8d7d6. May 17 00:31:02.100478 containerd[1476]: time="2025-05-17T00:31:02.100369998Z" level=info msg="StartContainer for \"2ca5c69bc4059e9420a5f5ff12df96d0fa39a50d7a10a2b320a0ed8e6bb8d7d6\" returns successfully" May 17 00:31:02.433584 systemd-networkd[1399]: calic47f256aa13: Gained IPv6LL May 17 00:31:02.917743 kubelet[2529]: I0517 00:31:02.917702 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-96dc47b75-xvwdn" podStartSLOduration=21.040006413 podStartE2EDuration="23.917688461s" podCreationTimestamp="2025-05-17 00:30:39 +0000 UTC" firstStartedPulling="2025-05-17 00:30:59.129196745 +0000 UTC m=+36.529686546" lastFinishedPulling="2025-05-17 00:31:02.006878793 +0000 UTC m=+39.407368594" observedRunningTime="2025-05-17 00:31:02.91729948 +0000 UTC m=+40.317789291" watchObservedRunningTime="2025-05-17 00:31:02.917688461 +0000 UTC m=+40.318178262" May 17 00:31:03.070904 containerd[1476]: time="2025-05-17T00:31:03.070842161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:03.071764 containerd[1476]: time="2025-05-17T00:31:03.071722443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:31:03.072175 containerd[1476]: time="2025-05-17T00:31:03.072135874Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:03.073558 containerd[1476]: time="2025-05-17T00:31:03.073526757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:03.074287 containerd[1476]: time="2025-05-17T00:31:03.074158909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.066618104s" May 17 00:31:03.074287 containerd[1476]: time="2025-05-17T00:31:03.074192289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:31:03.077096 containerd[1476]: time="2025-05-17T00:31:03.077073546Z" level=info msg="CreateContainer within sandbox \"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:31:03.090586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266436208.mount: Deactivated successfully. 
May 17 00:31:03.091473 containerd[1476]: time="2025-05-17T00:31:03.091123481Z" level=info msg="CreateContainer within sandbox \"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6ea224b232ea17624ff565bda87784e9646164c9764bb9c9d9402ab28ac5b6ce\"" May 17 00:31:03.093367 containerd[1476]: time="2025-05-17T00:31:03.092390254Z" level=info msg="StartContainer for \"6ea224b232ea17624ff565bda87784e9646164c9764bb9c9d9402ab28ac5b6ce\"" May 17 00:31:03.122527 systemd[1]: Started cri-containerd-6ea224b232ea17624ff565bda87784e9646164c9764bb9c9d9402ab28ac5b6ce.scope - libcontainer container 6ea224b232ea17624ff565bda87784e9646164c9764bb9c9d9402ab28ac5b6ce. May 17 00:31:03.144253 containerd[1476]: time="2025-05-17T00:31:03.144213041Z" level=info msg="StartContainer for \"6ea224b232ea17624ff565bda87784e9646164c9764bb9c9d9402ab28ac5b6ce\" returns successfully" May 17 00:31:03.146120 containerd[1476]: time="2025-05-17T00:31:03.146098186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:31:03.763453 systemd[1]: run-containerd-runc-k8s.io-6ea224b232ea17624ff565bda87784e9646164c9764bb9c9d9402ab28ac5b6ce-runc.fKHVdj.mount: Deactivated successfully. May 17 00:31:03.913321 kubelet[2529]: I0517 00:31:03.913298 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:04.583708 kubelet[2529]: I0517 00:31:04.583317 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:05.576756 containerd[1476]: time="2025-05-17T00:31:05.576687836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:05.577840 containerd[1476]: time="2025-05-17T00:31:05.577616208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:31:05.579462 containerd[1476]: time="2025-05-17T00:31:05.578225679Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:05.580717 containerd[1476]: time="2025-05-17T00:31:05.579855573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:31:05.580717 containerd[1476]: time="2025-05-17T00:31:05.580562824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.434436038s" May 17 00:31:05.580717 containerd[1476]: time="2025-05-17T00:31:05.580602564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:31:05.582162 containerd[1476]: time="2025-05-17T00:31:05.582111398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:31:05.584614 containerd[1476]: time="2025-05-17T00:31:05.584478143Z" level=info 
msg="CreateContainer within sandbox \"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:31:05.597638 containerd[1476]: time="2025-05-17T00:31:05.597601061Z" level=info msg="CreateContainer within sandbox \"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f52cdd854903cc9f6a4b462b75ba106e800cbf2e281a835ffeaaabdd17a53bb3\"" May 17 00:31:05.598898 containerd[1476]: time="2025-05-17T00:31:05.598873834Z" level=info msg="StartContainer for \"f52cdd854903cc9f6a4b462b75ba106e800cbf2e281a835ffeaaabdd17a53bb3\"" May 17 00:31:05.641685 systemd[1]: Started cri-containerd-f52cdd854903cc9f6a4b462b75ba106e800cbf2e281a835ffeaaabdd17a53bb3.scope - libcontainer container f52cdd854903cc9f6a4b462b75ba106e800cbf2e281a835ffeaaabdd17a53bb3. May 17 00:31:05.689129 containerd[1476]: time="2025-05-17T00:31:05.689090039Z" level=info msg="StartContainer for \"f52cdd854903cc9f6a4b462b75ba106e800cbf2e281a835ffeaaabdd17a53bb3\" returns successfully" May 17 00:31:05.744952 containerd[1476]: time="2025-05-17T00:31:05.744909179Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:05.745951 containerd[1476]: time="2025-05-17T00:31:05.745916791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:05.746049 containerd[1476]: time="2025-05-17T00:31:05.745983192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:31:05.746183 kubelet[2529]: E0517 00:31:05.746127 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:05.746183 kubelet[2529]: E0517 00:31:05.746169 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:05.747119 kubelet[2529]: E0517 00:31:05.747054 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be8615eacac5472da34b065a5f473380,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:05.749543 containerd[1476]: time="2025-05-17T00:31:05.749513859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:31:05.772709 kubelet[2529]: I0517 00:31:05.772673 2529 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:31:05.772709 kubelet[2529]: I0517 00:31:05.772702 2529 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:31:05.874365 containerd[1476]: time="2025-05-17T00:31:05.874316989Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:05.875538 containerd[1476]: time="2025-05-17T00:31:05.875497281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:05.875992 containerd[1476]: time="2025-05-17T00:31:05.875582301Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:31:05.876038 kubelet[2529]: E0517 00:31:05.875728 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:05.876038 kubelet[2529]: E0517 00:31:05.875782 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:05.876038 kubelet[2529]: E0517 00:31:05.875928 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:05.877523 kubelet[2529]: E0517 00:31:05.877388 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:31:05.936821 kubelet[2529]: I0517 00:31:05.936085 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h9kj7" podStartSLOduration=22.4530047 podStartE2EDuration="26.936070852s" podCreationTimestamp="2025-05-17 00:30:39 +0000 UTC" firstStartedPulling="2025-05-17 00:31:01.098686405 +0000 UTC m=+38.499176196" lastFinishedPulling="2025-05-17 00:31:05.581752557 +0000 UTC m=+42.982242348" observedRunningTime="2025-05-17 00:31:05.9353267 +0000 UTC m=+43.335816501" watchObservedRunningTime="2025-05-17 00:31:05.936070852 +0000 UTC m=+43.336560653" May 17 00:31:07.012080 kubelet[2529]: I0517 00:31:07.011857 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:08.895216 kubelet[2529]: I0517 00:31:08.894974 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:08.926971 kubelet[2529]: I0517 00:31:08.926041 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:08.927082 containerd[1476]: time="2025-05-17T00:31:08.926812673Z" level=info msg="StopContainer for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" with timeout 30 (s)" May 17 00:31:08.927614 containerd[1476]: time="2025-05-17T00:31:08.927580815Z" level=info msg="Stop container \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" with signal terminated" May 17 00:31:08.971596 systemd[1]: Created slice kubepods-besteffort-poda5b23b65_a532_41bb_9644_b86758d7a0bf.slice - libcontainer container kubepods-besteffort-poda5b23b65_a532_41bb_9644_b86758d7a0bf.slice. May 17 00:31:08.973308 systemd[1]: cri-containerd-2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145.scope: Deactivated successfully. 
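Every ErrImagePull above shares one root cause: the anonymous token endpoint at ghcr.io answers 403 Forbidden before any image blob is ever requested. Below is a minimal Go sketch that replays the exact GET containerd logged, useful for confirming from the node whether the registry itself rejects the pull scope. The URL and query parameters are copied verbatim from the log; no credentials are involved, since containerd was fetching an anonymous token.

```go
// probe_token.go - replay containerd's anonymous token request for
// ghcr.io/flatcar/calico/whisker:v3.30.0 and print the raw response.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Same scope/service pair as the failing GET in the journal;
	// url.Values reproduces the %3A / %2F encoding seen there.
	q := url.Values{}
	q.Set("scope", "repository:flatcar/calico/whisker:pull")
	q.Set("service", "ghcr.io")

	resp, err := http.Get("https://ghcr.io/token?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// 403 here matches the ErrImagePull above; 200 would carry a JSON
	// document with an anonymous bearer token for the pull.
	fmt.Printf("status=%s body=%s\n", resp.Status, body)
}
```

A 403 from this probe rules out kubelet and containerd configuration on the node and points at the registry side (rate limiting, a denied or private package, or an org-level policy).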
May 17 00:31:08.994465 kubelet[2529]: I0517 00:31:08.994263 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5b23b65-a532-41bb-9644-b86758d7a0bf-calico-apiserver-certs\") pod \"calico-apiserver-59c6b49969-94hh4\" (UID: \"a5b23b65-a532-41bb-9644-b86758d7a0bf\") " pod="calico-apiserver/calico-apiserver-59c6b49969-94hh4" May 17 00:31:08.994465 kubelet[2529]: I0517 00:31:08.994303 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvx8x\" (UniqueName: \"kubernetes.io/projected/a5b23b65-a532-41bb-9644-b86758d7a0bf-kube-api-access-lvx8x\") pod \"calico-apiserver-59c6b49969-94hh4\" (UID: \"a5b23b65-a532-41bb-9644-b86758d7a0bf\") " pod="calico-apiserver/calico-apiserver-59c6b49969-94hh4" May 17 00:31:09.004091 containerd[1476]: time="2025-05-17T00:31:09.004039540Z" level=info msg="shim disconnected" id=2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145 namespace=k8s.io May 17 00:31:09.004091 containerd[1476]: time="2025-05-17T00:31:09.004088390Z" level=warning msg="cleaning up after shim disconnected" id=2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145 namespace=k8s.io May 17 00:31:09.004091 containerd[1476]: time="2025-05-17T00:31:09.004095570Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:31:09.004816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145-rootfs.mount: Deactivated successfully. May 17 00:31:09.032558 kubelet[2529]: I0517 00:31:09.032531 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:09.033801 kubelet[2529]: E0517 00:31:09.033668 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:31:09.061659 containerd[1476]: time="2025-05-17T00:31:09.061621476Z" level=info msg="StopContainer for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" returns successfully" May 17 00:31:09.062349 containerd[1476]: time="2025-05-17T00:31:09.062209887Z" level=info msg="StopPodSandbox for \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\"" May 17 00:31:09.062349 containerd[1476]: time="2025-05-17T00:31:09.062243647Z" level=info msg="Container to stop \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:31:09.071614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d-shm.mount: Deactivated successfully. May 17 00:31:09.074187 systemd[1]: cri-containerd-65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d.scope: Deactivated successfully. May 17 00:31:09.091559 containerd[1476]: time="2025-05-17T00:31:09.091515956Z" level=info msg="shim disconnected" id=65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d namespace=k8s.io May 17 00:31:09.093364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d-rootfs.mount: Deactivated successfully. 
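The dns.go:153 warning above fires because the node's resolv.conf lists more nameservers than Linux resolvers honour (three); kubelet keeps the first three and logs the applied line, which is exactly what the journal shows. A toy reimplementation of that trim, assuming /etc/resolv.conf as input — not kubelet's actual code:

```go
// trim_nameservers.go - illustrate the three-nameserver cap behind the
// kubelet "Nameserver limits exceeded" warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolv.conf honours at most three

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		// kubelet logs the dropped tail and applies the first three,
		// producing the "applied nameserver line" seen in the journal.
		fmt.Printf("limit exceeded, dropping %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied:", strings.Join(servers, " "))
}
```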
May 17 00:31:09.093704 containerd[1476]: time="2025-05-17T00:31:09.093581130Z" level=warning msg="cleaning up after shim disconnected" id=65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d namespace=k8s.io May 17 00:31:09.093704 containerd[1476]: time="2025-05-17T00:31:09.093596780Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:31:09.159014 systemd-networkd[1399]: calid28495967b6: Link DOWN May 17 00:31:09.159023 systemd-networkd[1399]: calid28495967b6: Lost carrier May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.156 [INFO][5530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.157 [INFO][5530] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" iface="eth0" netns="/var/run/netns/cni-9624e242-18fc-ef9e-c07e-4bb6bc794298" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.157 [INFO][5530] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" iface="eth0" netns="/var/run/netns/cni-9624e242-18fc-ef9e-c07e-4bb6bc794298" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.168 [INFO][5530] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" after=11.255169ms iface="eth0" netns="/var/run/netns/cni-9624e242-18fc-ef9e-c07e-4bb6bc794298" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.168 [INFO][5530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.168 [INFO][5530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.209 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.209 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.209 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
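The CNI DEL flow now in progress ("Releasing address using handleID", under the host-wide IPAM lock) relies on every allocation being recorded against a handle derived from the sandbox ID, so teardown never needs to know which concrete IP the pod held. A toy model of that bookkeeping follows, with a mutex standing in for Calico's host-wide lock; the address is illustrative, since the journal never prints the IP being released here.

```go
// handle_ipam.go - release-by-handle pattern from the IPAM lines above.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type ipam struct {
	mu       sync.Mutex // stands in for the host-wide IPAM lock
	byHandle map[string][]netip.Addr
}

func (p *ipam) assign(handle string, a netip.Addr) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.byHandle[handle] = append(p.byHandle[handle], a)
}

// releaseByHandle frees everything the handle owns; the caller only
// needs the handle, never the addresses themselves.
func (p *ipam) releaseByHandle(handle string) []netip.Addr {
	p.mu.Lock()
	defer p.mu.Unlock()
	freed := p.byHandle[handle]
	delete(p.byHandle, handle)
	return freed
}

func main() {
	p := &ipam{byHandle: map[string][]netip.Addr{}}
	// Handle format as logged: "k8s-pod-network." + sandbox container ID.
	h := "k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d"
	p.assign(h, netip.MustParseAddr("192.168.114.130")) // illustrative address
	fmt.Println("released:", p.releaseByHandle(h))
}
```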
May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.232 [INFO][5539] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.232 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.233 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:09.237654 containerd[1476]: 2025-05-17 00:31:09.235 [INFO][5530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:09.238109 containerd[1476]: time="2025-05-17T00:31:09.237952561Z" level=info msg="TearDown network for sandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" successfully" May 17 00:31:09.238109 containerd[1476]: time="2025-05-17T00:31:09.238022511Z" level=info msg="StopPodSandbox for \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" returns successfully" May 17 00:31:09.238506 containerd[1476]: time="2025-05-17T00:31:09.238491862Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" May 17 00:31:09.281405 containerd[1476]: time="2025-05-17T00:31:09.280618312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6b49969-94hh4,Uid:a5b23b65-a532-41bb-9644-b86758d7a0bf,Namespace:calico-apiserver,Attempt:0,}" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.262 [WARNING][5554] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"16a8edb6-95df-4bc2-a130-7cc52db94763", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d", Pod:"calico-apiserver-7cf648ccbb-chqjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid28495967b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.263 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.263 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" iface="eth0" netns="" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.263 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.263 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.281 [INFO][5561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.281 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.281 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.286 [WARNING][5561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.286 [INFO][5561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.287 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:09.292537 containerd[1476]: 2025-05-17 00:31:09.289 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:09.295378 containerd[1476]: time="2025-05-17T00:31:09.292561202Z" level=info msg="TearDown network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" successfully" May 17 00:31:09.295378 containerd[1476]: time="2025-05-17T00:31:09.292574572Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" returns successfully" May 17 00:31:09.398350 kubelet[2529]: I0517 00:31:09.398315 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2znr\" (UniqueName: \"kubernetes.io/projected/16a8edb6-95df-4bc2-a130-7cc52db94763-kube-api-access-q2znr\") pod \"16a8edb6-95df-4bc2-a130-7cc52db94763\" (UID: \"16a8edb6-95df-4bc2-a130-7cc52db94763\") " May 17 00:31:09.398916 kubelet[2529]: I0517 00:31:09.398902 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16a8edb6-95df-4bc2-a130-7cc52db94763-calico-apiserver-certs\") pod \"16a8edb6-95df-4bc2-a130-7cc52db94763\" (UID: \"16a8edb6-95df-4bc2-a130-7cc52db94763\") " May 17 00:31:09.402069 kubelet[2529]: I0517 00:31:09.402048 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a8edb6-95df-4bc2-a130-7cc52db94763-kube-api-access-q2znr" (OuterVolumeSpecName: "kube-api-access-q2znr") pod "16a8edb6-95df-4bc2-a130-7cc52db94763" (UID: "16a8edb6-95df-4bc2-a130-7cc52db94763"). InnerVolumeSpecName "kube-api-access-q2znr". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:31:09.404288 kubelet[2529]: I0517 00:31:09.404246 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a8edb6-95df-4bc2-a130-7cc52db94763-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "16a8edb6-95df-4bc2-a130-7cc52db94763" (UID: "16a8edb6-95df-4bc2-a130-7cc52db94763"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:31:09.461245 systemd-networkd[1399]: cali1945c7b27fc: Link UP May 17 00:31:09.463697 systemd-networkd[1399]: cali1945c7b27fc: Gained carrier May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.316 [INFO][5568] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.326 [INFO][5568] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0 calico-apiserver-59c6b49969- calico-apiserver a5b23b65-a532-41bb-9644-b86758d7a0bf 1157 0 2025-05-17 00:31:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c6b49969 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-232-0-241 calico-apiserver-59c6b49969-94hh4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1945c7b27fc [] [] }} ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.326 [INFO][5568] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.350 [INFO][5585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" HandleID="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.350 [INFO][5585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" HandleID="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-232-0-241", "pod":"calico-apiserver-59c6b49969-94hh4", "timestamp":"2025-05-17 00:31:09.350026168 +0000 UTC"}, Hostname:"172-232-0-241", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.350 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.350 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.350 [INFO][5585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-0-241' May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.357 [INFO][5585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.433 [INFO][5585] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.438 [INFO][5585] ipam/ipam.go 511: Trying affinity for 192.168.114.128/26 host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.442 [INFO][5585] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.444 [INFO][5585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.444 [INFO][5585] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.446 [INFO][5585] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.448 [INFO][5585] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.453 [INFO][5585] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.138/26] block=192.168.114.128/26 handle="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.453 [INFO][5585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.138/26] handle="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" host="172-232-0-241" May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.453 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
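The assignment walk above confirms the node's block affinity before handing out an address: 192.168.114.128/26 spans the 64 addresses 192.168.114.128-192.168.114.191, so the claimed 192.168.114.138 falls inside it. The arithmetic, checked with the standard library:

```go
// block_check.go - verify the affinity/claim math from the IPAM walk.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.114.128/26")
	addr := netip.MustParseAddr("192.168.114.138")

	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses per block
	fmt.Printf("block=%s size=%d contains(%s)=%v\n",
		block, size, addr, block.Contains(addr))
	// Output: block=192.168.114.128/26 size=64 contains(192.168.114.138)=true
}
```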
May 17 00:31:09.476409 containerd[1476]: 2025-05-17 00:31:09.453 [INFO][5585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.138/26] IPv6=[] ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" HandleID="k8s-pod-network.3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.476912 containerd[1476]: 2025-05-17 00:31:09.456 [INFO][5568] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0", GenerateName:"calico-apiserver-59c6b49969-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5b23b65-a532-41bb-9644-b86758d7a0bf", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 31, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6b49969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"", Pod:"calico-apiserver-59c6b49969-94hh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1945c7b27fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:09.476912 containerd[1476]: 2025-05-17 00:31:09.456 [INFO][5568] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.138/32] ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.476912 containerd[1476]: 2025-05-17 00:31:09.456 [INFO][5568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1945c7b27fc ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.476912 containerd[1476]: 2025-05-17 00:31:09.462 [INFO][5568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.476912 containerd[1476]: 2025-05-17 00:31:09.463 [INFO][5568] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0", GenerateName:"calico-apiserver-59c6b49969-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5b23b65-a532-41bb-9644-b86758d7a0bf", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 31, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6b49969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc", Pod:"calico-apiserver-59c6b49969-94hh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1945c7b27fc", MAC:"42:7b:5b:b8:79:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:09.476912 containerd[1476]: 2025-05-17 00:31:09.472 [INFO][5568] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc" Namespace="calico-apiserver" Pod="calico-apiserver-59c6b49969-94hh4" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--59c6b49969--94hh4-eth0" May 17 00:31:09.500595 kubelet[2529]: I0517 00:31:09.500003 2529 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2znr\" (UniqueName: \"kubernetes.io/projected/16a8edb6-95df-4bc2-a130-7cc52db94763-kube-api-access-q2znr\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:31:09.500595 kubelet[2529]: I0517 00:31:09.500032 2529 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16a8edb6-95df-4bc2-a130-7cc52db94763-calico-apiserver-certs\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:31:09.500700 containerd[1476]: time="2025-05-17T00:31:09.500451879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:31:09.500700 containerd[1476]: time="2025-05-17T00:31:09.500490549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:31:09.500700 containerd[1476]: time="2025-05-17T00:31:09.500499529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:31:09.500700 containerd[1476]: time="2025-05-17T00:31:09.500558529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:31:09.516725 systemd[1]: Started cri-containerd-3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc.scope - libcontainer container 3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc. May 17 00:31:09.521456 kernel: bpftool[5662]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:31:09.553227 containerd[1476]: time="2025-05-17T00:31:09.553186717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c6b49969-94hh4,Uid:a5b23b65-a532-41bb-9644-b86758d7a0bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc\"" May 17 00:31:09.556335 containerd[1476]: time="2025-05-17T00:31:09.556086871Z" level=info msg="CreateContainer within sandbox \"3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:31:09.563834 containerd[1476]: time="2025-05-17T00:31:09.563812924Z" level=info msg="CreateContainer within sandbox \"3c273ebccb2df58236f740235e359b44e2372d50b95eedd4df8d7e22bc63bbbc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e27ac9ac50baedef764db15226559e88f0fe8d5353e2b7bbde84060d7cbec88\"" May 17 00:31:09.564497 containerd[1476]: time="2025-05-17T00:31:09.564468975Z" level=info msg="StartContainer for \"1e27ac9ac50baedef764db15226559e88f0fe8d5353e2b7bbde84060d7cbec88\"" May 17 00:31:09.597782 systemd[1]: Started cri-containerd-1e27ac9ac50baedef764db15226559e88f0fe8d5353e2b7bbde84060d7cbec88.scope - libcontainer container 1e27ac9ac50baedef764db15226559e88f0fe8d5353e2b7bbde84060d7cbec88. May 17 00:31:09.642327 containerd[1476]: time="2025-05-17T00:31:09.642251485Z" level=info msg="StartContainer for \"1e27ac9ac50baedef764db15226559e88f0fe8d5353e2b7bbde84060d7cbec88\" returns successfully" May 17 00:31:09.799033 systemd-networkd[1399]: vxlan.calico: Link UP May 17 00:31:09.799041 systemd-networkd[1399]: vxlan.calico: Gained carrier May 17 00:31:09.944246 kubelet[2529]: I0517 00:31:09.944212 2529 scope.go:117] "RemoveContainer" containerID="2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145" May 17 00:31:09.948512 containerd[1476]: time="2025-05-17T00:31:09.948479286Z" level=info msg="RemoveContainer for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\"" May 17 00:31:09.949604 kubelet[2529]: E0517 00:31:09.949570 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:31:09.951800 containerd[1476]: time="2025-05-17T00:31:09.951774302Z" level=info msg="RemoveContainer for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" returns successfully" May 17 00:31:09.954323 kubelet[2529]: I0517 00:31:09.954297 2529 scope.go:117] "RemoveContainer" containerID="2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145" May 17 00:31:09.954967 containerd[1476]: time="2025-05-17T00:31:09.954514236Z" level=error msg="ContainerStatus for \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\": not found" May 17 00:31:09.955018 kubelet[2529]: E0517 00:31:09.954973 2529 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\": not found" containerID="2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145" May 17 00:31:09.955018 kubelet[2529]: I0517 00:31:09.954991 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145"} err="failed to get container status \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f947d674d9dfc795964812d5ce799be1215fbf530f94d6c1ff5ab080474e145\": not found" May 17 00:31:09.956305 systemd[1]: Removed slice kubepods-besteffort-pod16a8edb6_95df_4bc2_a130_7cc52db94763.slice - libcontainer container kubepods-besteffort-pod16a8edb6_95df_4bc2_a130_7cc52db94763.slice. May 17 00:31:09.961307 kubelet[2529]: I0517 00:31:09.961269 2529 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59c6b49969-94hh4" podStartSLOduration=1.9612618880000001 podStartE2EDuration="1.961261888s" podCreationTimestamp="2025-05-17 00:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:31:09.960285886 +0000 UTC m=+47.360775687" watchObservedRunningTime="2025-05-17 00:31:09.961261888 +0000 UTC m=+47.361751689" May 17 00:31:10.011114 systemd[1]: run-netns-cni\x2d9624e242\x2d18fc\x2def9e\x2dc07e\x2d4bb6bc794298.mount: Deactivated successfully. May 17 00:31:10.011209 systemd[1]: var-lib-kubelet-pods-16a8edb6\x2d95df\x2d4bc2\x2da130\x2d7cc52db94763-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2znr.mount: Deactivated successfully. May 17 00:31:10.011274 systemd[1]: var-lib-kubelet-pods-16a8edb6\x2d95df\x2d4bc2\x2da130\x2d7cc52db94763-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
May 17 00:31:10.695754 kubelet[2529]: I0517 00:31:10.695414 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a8edb6-95df-4bc2-a130-7cc52db94763" path="/var/lib/kubelet/pods/16a8edb6-95df-4bc2-a130-7cc52db94763/volumes" May 17 00:31:10.952285 kubelet[2529]: I0517 00:31:10.952186 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:11.329637 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL May 17 00:31:11.457679 systemd-networkd[1399]: cali1945c7b27fc: Gained IPv6LL May 17 00:31:13.695945 containerd[1476]: time="2025-05-17T00:31:13.695839858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:31:13.956497 containerd[1476]: time="2025-05-17T00:31:13.956312354Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:13.957815 containerd[1476]: time="2025-05-17T00:31:13.957735386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:13.958032 containerd[1476]: time="2025-05-17T00:31:13.957840196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:31:13.958136 kubelet[2529]: E0517 00:31:13.958074 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:13.958136 kubelet[2529]: E0517 00:31:13.958120 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:13.959510 kubelet[2529]: E0517 00:31:13.958229 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnr67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-s52mw_calico-system(ee80876b-aa39-4375-a4e1-fd4e85f8d3ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:13.959968 kubelet[2529]: E0517 00:31:13.959595 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:31:20.694624 kubelet[2529]: E0517 00:31:20.694411 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:31:22.685727 containerd[1476]: time="2025-05-17T00:31:22.684331091Z" level=info msg="StopPodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\"" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.719 [WARNING][5867] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" WorkloadEndpoint="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.719 [INFO][5867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.719 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" iface="eth0" netns="" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.719 [INFO][5867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.719 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.738 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.738 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.738 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.743 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.743 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.744 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:22.748645 containerd[1476]: 2025-05-17 00:31:22.746 [INFO][5867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.748645 containerd[1476]: time="2025-05-17T00:31:22.748509057Z" level=info msg="TearDown network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" successfully" May 17 00:31:22.748645 containerd[1476]: time="2025-05-17T00:31:22.748530167Z" level=info msg="StopPodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" returns successfully" May 17 00:31:22.749312 containerd[1476]: time="2025-05-17T00:31:22.748959487Z" level=info msg="RemovePodSandbox for \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\"" May 17 00:31:22.749312 containerd[1476]: time="2025-05-17T00:31:22.748988558Z" level=info msg="Forcibly stopping sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\"" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.782 [WARNING][5890] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" WorkloadEndpoint="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.782 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.782 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" iface="eth0" netns="" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.782 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.782 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.798 [INFO][5897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.800 [INFO][5897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.800 [INFO][5897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.804 [WARNING][5897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.804 [INFO][5897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" HandleID="k8s-pod-network.5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" Workload="172--232--0--241-k8s-whisker--57fb894b7c--5tcq4-eth0" May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.805 [INFO][5897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:22.810457 containerd[1476]: 2025-05-17 00:31:22.807 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d" May 17 00:31:22.810457 containerd[1476]: time="2025-05-17T00:31:22.809544131Z" level=info msg="TearDown network for sandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" successfully" May 17 00:31:22.813072 containerd[1476]: time="2025-05-17T00:31:22.813033504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:22.813137 containerd[1476]: time="2025-05-17T00:31:22.813094804Z" level=info msg="RemovePodSandbox \"5ab8b9efb8934b0a21ce42430fb72e6b709d6e48044dcd828e3b7fe6848d556d\" returns successfully" May 17 00:31:22.813592 containerd[1476]: time="2025-05-17T00:31:22.813573964Z" level=info msg="StopPodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\"" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.845 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323", Pod:"calico-apiserver-7cf648ccbb-wj8jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cd4f335fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.846 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.846 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" iface="eth0" netns="" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.846 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.846 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.868 [INFO][5919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.868 [INFO][5919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.869 [INFO][5919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.874 [WARNING][5919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.874 [INFO][5919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.875 [INFO][5919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:22.879527 containerd[1476]: 2025-05-17 00:31:22.877 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.880829 containerd[1476]: time="2025-05-17T00:31:22.879497011Z" level=info msg="TearDown network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" successfully" May 17 00:31:22.880829 containerd[1476]: time="2025-05-17T00:31:22.879954642Z" level=info msg="StopPodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" returns successfully" May 17 00:31:22.880829 containerd[1476]: time="2025-05-17T00:31:22.880599732Z" level=info msg="RemovePodSandbox for \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\"" May 17 00:31:22.880829 containerd[1476]: time="2025-05-17T00:31:22.880621522Z" level=info msg="Forcibly stopping sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\"" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.917 [WARNING][5933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0", GenerateName:"calico-apiserver-7cf648ccbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf648ccbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323", Pod:"calico-apiserver-7cf648ccbb-wj8jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cd4f335fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.917 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.917 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" iface="eth0" netns="" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.917 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.917 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.952 [INFO][5941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.952 [INFO][5941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.952 [INFO][5941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.957 [WARNING][5941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.957 [INFO][5941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" HandleID="k8s-pod-network.436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.958 [INFO][5941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:22.962080 containerd[1476]: 2025-05-17 00:31:22.960 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a" May 17 00:31:22.962080 containerd[1476]: time="2025-05-17T00:31:22.962028101Z" level=info msg="TearDown network for sandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" successfully" May 17 00:31:22.966323 containerd[1476]: time="2025-05-17T00:31:22.966295764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:22.966372 containerd[1476]: time="2025-05-17T00:31:22.966344314Z" level=info msg="RemovePodSandbox \"436844fb8ee43f34d59d05f1fae944fa50a76bfbb57e02ae3161b2e261e2fc0a\" returns successfully" May 17 00:31:22.966864 containerd[1476]: time="2025-05-17T00:31:22.966762394Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:22.995 [WARNING][5955] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:22.995 [INFO][5955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:22.995 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" iface="eth0" netns="" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:22.995 [INFO][5955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:22.995 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.018 [INFO][5962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.018 [INFO][5962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.018 [INFO][5962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.023 [WARNING][5962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.023 [INFO][5962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.024 [INFO][5962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.029586 containerd[1476]: 2025-05-17 00:31:23.026 [INFO][5955] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.030126 containerd[1476]: time="2025-05-17T00:31:23.029911779Z" level=info msg="TearDown network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" successfully" May 17 00:31:23.030126 containerd[1476]: time="2025-05-17T00:31:23.029961419Z" level=info msg="StopPodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" returns successfully" May 17 00:31:23.030495 containerd[1476]: time="2025-05-17T00:31:23.030462259Z" level=info msg="RemovePodSandbox for \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" May 17 00:31:23.030540 containerd[1476]: time="2025-05-17T00:31:23.030496279Z" level=info msg="Forcibly stopping sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\"" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.066 [WARNING][5976] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.066 [INFO][5976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.066 [INFO][5976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" iface="eth0" netns="" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.066 [INFO][5976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.067 [INFO][5976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.090 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.090 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.091 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.099 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.099 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" HandleID="k8s-pod-network.ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.100 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.106576 containerd[1476]: 2025-05-17 00:31:23.103 [INFO][5976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe" May 17 00:31:23.106976 containerd[1476]: time="2025-05-17T00:31:23.106657060Z" level=info msg="TearDown network for sandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" successfully" May 17 00:31:23.111631 containerd[1476]: time="2025-05-17T00:31:23.110331633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:23.111631 containerd[1476]: time="2025-05-17T00:31:23.110392483Z" level=info msg="RemovePodSandbox \"ecf94f8049a0359ba334ed431597ac43e3826112ce75c975eb65f938c0debebe\" returns successfully" May 17 00:31:23.111903 containerd[1476]: time="2025-05-17T00:31:23.111762004Z" level=info msg="StopPodSandbox for \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\"" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.174 [WARNING][5997] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.174 [INFO][5997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.174 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" iface="eth0" netns="" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.174 [INFO][5997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.174 [INFO][5997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.205 [INFO][6004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.205 [INFO][6004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.205 [INFO][6004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.217 [WARNING][6004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.218 [INFO][6004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.219 [INFO][6004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.224307 containerd[1476]: 2025-05-17 00:31:23.222 [INFO][5997] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.226219 containerd[1476]: time="2025-05-17T00:31:23.224725300Z" level=info msg="TearDown network for sandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" successfully" May 17 00:31:23.226219 containerd[1476]: time="2025-05-17T00:31:23.224766350Z" level=info msg="StopPodSandbox for \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" returns successfully" May 17 00:31:23.226219 containerd[1476]: time="2025-05-17T00:31:23.225807011Z" level=info msg="RemovePodSandbox for \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\"" May 17 00:31:23.226219 containerd[1476]: time="2025-05-17T00:31:23.225827161Z" level=info msg="Forcibly stopping sandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\"" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.276 [WARNING][6018] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.276 [INFO][6018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.276 [INFO][6018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" iface="eth0" netns="" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.276 [INFO][6018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.276 [INFO][6018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.297 [INFO][6025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.298 [INFO][6025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.298 [INFO][6025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.305 [WARNING][6025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.305 [INFO][6025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" HandleID="k8s-pod-network.65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--chqjq-eth0" May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.307 [INFO][6025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.312833 containerd[1476]: 2025-05-17 00:31:23.309 [INFO][6018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d" May 17 00:31:23.312833 containerd[1476]: time="2025-05-17T00:31:23.312567370Z" level=info msg="TearDown network for sandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" successfully" May 17 00:31:23.316775 containerd[1476]: time="2025-05-17T00:31:23.316286452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:23.316775 containerd[1476]: time="2025-05-17T00:31:23.316345892Z" level=info msg="RemovePodSandbox \"65c027ea84481c2d751d396ba2104c400876aaa95f5461ad2b07bbee6e88f05d\" returns successfully" May 17 00:31:23.317174 containerd[1476]: time="2025-05-17T00:31:23.316872043Z" level=info msg="StopPodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\"" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.346 [WARNING][6040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0", GenerateName:"calico-apiserver-59c6b49969-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd5a12e6-0476-4ee4-9663-5e2d40e20810", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6b49969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6", Pod:"calico-apiserver-59c6b49969-lmb87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dbc812d449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.347 [INFO][6040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.347 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" iface="eth0" netns="" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.347 [INFO][6040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.347 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.364 [INFO][6047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.364 [INFO][6047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.365 [INFO][6047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.368 [WARNING][6047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.368 [INFO][6047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.369 [INFO][6047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.372889 containerd[1476]: 2025-05-17 00:31:23.371 [INFO][6040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.373337 containerd[1476]: time="2025-05-17T00:31:23.372940361Z" level=info msg="TearDown network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" successfully" May 17 00:31:23.373337 containerd[1476]: time="2025-05-17T00:31:23.372963971Z" level=info msg="StopPodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" returns successfully" May 17 00:31:23.373555 containerd[1476]: time="2025-05-17T00:31:23.373527721Z" level=info msg="RemovePodSandbox for \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\"" May 17 00:31:23.373676 containerd[1476]: time="2025-05-17T00:31:23.373653591Z" level=info msg="Forcibly stopping sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\"" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.401 [WARNING][6061] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0", GenerateName:"calico-apiserver-59c6b49969-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd5a12e6-0476-4ee4-9663-5e2d40e20810", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c6b49969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"dca13b1dac7873ae1c93b71a9e6942a1a9ad75d63588c0df6457ea5e166425b6", Pod:"calico-apiserver-59c6b49969-lmb87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dbc812d449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.401 [INFO][6061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.401 [INFO][6061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" iface="eth0" netns="" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.401 [INFO][6061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.401 [INFO][6061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.420 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.420 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.420 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.429 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.429 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" HandleID="k8s-pod-network.07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" Workload="172--232--0--241-k8s-calico--apiserver--59c6b49969--lmb87-eth0" May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.431 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.437368 containerd[1476]: 2025-05-17 00:31:23.433 [INFO][6061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0" May 17 00:31:23.437739 containerd[1476]: time="2025-05-17T00:31:23.437486014Z" level=info msg="TearDown network for sandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" successfully" May 17 00:31:23.442750 containerd[1476]: time="2025-05-17T00:31:23.442485178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:23.442750 containerd[1476]: time="2025-05-17T00:31:23.442577858Z" level=info msg="RemovePodSandbox \"07ddb1e26e089471337f5de9e3b720618ed6aa21daa68a7d499ba62a5cfa80d0\" returns successfully" May 17 00:31:23.444010 containerd[1476]: time="2025-05-17T00:31:23.443551668Z" level=info msg="StopPodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\"" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.483 [WARNING][6082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235", Pod:"goldmane-8f77d7b6c-s52mw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2011c45fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.485 [INFO][6082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.485 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" iface="eth0" netns="" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.485 [INFO][6082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.485 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.522 [INFO][6089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.522 [INFO][6089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.522 [INFO][6089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.529 [WARNING][6089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.529 [INFO][6089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.531 [INFO][6089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.536683 containerd[1476]: 2025-05-17 00:31:23.533 [INFO][6082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.537912 containerd[1476]: time="2025-05-17T00:31:23.537492122Z" level=info msg="TearDown network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" successfully" May 17 00:31:23.537912 containerd[1476]: time="2025-05-17T00:31:23.537518182Z" level=info msg="StopPodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" returns successfully" May 17 00:31:23.538834 containerd[1476]: time="2025-05-17T00:31:23.538548543Z" level=info msg="RemovePodSandbox for \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\"" May 17 00:31:23.538834 containerd[1476]: time="2025-05-17T00:31:23.538574783Z" level=info msg="Forcibly stopping sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\"" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.574 [WARNING][6103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"ee80876b-aa39-4375-a4e1-fd4e85f8d3ee", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3d4ba11f90b82c0929e6bb6830774f0624a3c92876f6ebe4619cf78401685235", Pod:"goldmane-8f77d7b6c-s52mw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2011c45fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.574 [INFO][6103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.574 [INFO][6103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" iface="eth0" netns="" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.574 [INFO][6103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.574 [INFO][6103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.595 [INFO][6111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.595 [INFO][6111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.595 [INFO][6111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.598 [WARNING][6111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.598 [INFO][6111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" HandleID="k8s-pod-network.e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" Workload="172--232--0--241-k8s-goldmane--8f77d7b6c--s52mw-eth0" May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.599 [INFO][6111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.603076 containerd[1476]: 2025-05-17 00:31:23.601 [INFO][6103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef" May 17 00:31:23.603399 containerd[1476]: time="2025-05-17T00:31:23.603108606Z" level=info msg="TearDown network for sandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" successfully" May 17 00:31:23.607729 containerd[1476]: time="2025-05-17T00:31:23.607694579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:23.607772 containerd[1476]: time="2025-05-17T00:31:23.607745679Z" level=info msg="RemovePodSandbox \"e35e91a7a43580568a1680a4f64d1f103c18acc4bfd67e2e94a651786beb31ef\" returns successfully" May 17 00:31:23.608206 containerd[1476]: time="2025-05-17T00:31:23.608183550Z" level=info msg="StopPodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\"" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.633 [WARNING][6125] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02488dc1-7388-4c3e-bda7-2622333fb0c8", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf", Pod:"coredns-7c65d6cfc9-wvn6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali402c77c3fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.633 [INFO][6125] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.634 [INFO][6125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" iface="eth0" netns="" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.634 [INFO][6125] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.634 [INFO][6125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.650 [INFO][6132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.650 [INFO][6132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.651 [INFO][6132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.657 [WARNING][6132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.657 [INFO][6132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.659 [INFO][6132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.665524 containerd[1476]: 2025-05-17 00:31:23.660 [INFO][6125] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.665955 containerd[1476]: time="2025-05-17T00:31:23.665843469Z" level=info msg="TearDown network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" successfully" May 17 00:31:23.665955 containerd[1476]: time="2025-05-17T00:31:23.665867789Z" level=info msg="StopPodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" returns successfully" May 17 00:31:23.666450 containerd[1476]: time="2025-05-17T00:31:23.666386279Z" level=info msg="RemovePodSandbox for \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\"" May 17 00:31:23.666552 containerd[1476]: time="2025-05-17T00:31:23.666421899Z" level=info msg="Forcibly stopping sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\"" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.711 [WARNING][6147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02488dc1-7388-4c3e-bda7-2622333fb0c8", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"89aea15238afcffdcae05dcbfc9b363a5cdb8bc029502c966ac085d211253fcf", Pod:"coredns-7c65d6cfc9-wvn6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali402c77c3fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.711 [INFO][6147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.711 [INFO][6147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" iface="eth0" netns="" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.711 [INFO][6147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.711 [INFO][6147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.729 [INFO][6154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.729 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.729 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.734 [WARNING][6154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.734 [INFO][6154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" HandleID="k8s-pod-network.2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--wvn6c-eth0" May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.735 [INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.739203 containerd[1476]: 2025-05-17 00:31:23.737 [INFO][6147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341" May 17 00:31:23.739823 containerd[1476]: time="2025-05-17T00:31:23.739239148Z" level=info msg="TearDown network for sandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" successfully" May 17 00:31:23.743207 containerd[1476]: time="2025-05-17T00:31:23.743175111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:23.743272 containerd[1476]: time="2025-05-17T00:31:23.743248591Z" level=info msg="RemovePodSandbox \"2d09e58cd1d5114bd8c2c446bffcaea65c518f0a4afc6a3297cfe9af8e135341\" returns successfully" May 17 00:31:23.744158 containerd[1476]: time="2025-05-17T00:31:23.744125342Z" level=info msg="StopPodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\"" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.774 [WARNING][6168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-csi--node--driver--h9kj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0996e84d-dd0b-49e3-addd-0931e48a258e", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993", Pod:"csi-node-driver-h9kj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic47f256aa13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.774 [INFO][6168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.774 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" iface="eth0" netns="" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.774 [INFO][6168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.774 [INFO][6168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.794 [INFO][6175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.794 [INFO][6175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.794 [INFO][6175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.799 [WARNING][6175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.799 [INFO][6175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.800 [INFO][6175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.808132 containerd[1476]: 2025-05-17 00:31:23.802 [INFO][6168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.808132 containerd[1476]: time="2025-05-17T00:31:23.807848425Z" level=info msg="TearDown network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" successfully" May 17 00:31:23.808132 containerd[1476]: time="2025-05-17T00:31:23.807968145Z" level=info msg="StopPodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" returns successfully" May 17 00:31:23.810488 containerd[1476]: time="2025-05-17T00:31:23.809831426Z" level=info msg="RemovePodSandbox for \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\"" May 17 00:31:23.810488 containerd[1476]: time="2025-05-17T00:31:23.809854276Z" level=info msg="Forcibly stopping sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\"" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.841 [WARNING][6190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-csi--node--driver--h9kj7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0996e84d-dd0b-49e3-addd-0931e48a258e", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3430b6a776e3c50bb388d2267fb80c3edd4fc6178e70a9f248ed9d7b230af993", Pod:"csi-node-driver-h9kj7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic47f256aa13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.841 [INFO][6190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.841 [INFO][6190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" iface="eth0" netns="" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.841 [INFO][6190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.841 [INFO][6190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.861 [INFO][6197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.861 [INFO][6197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.861 [INFO][6197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.865 [WARNING][6197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.865 [INFO][6197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" HandleID="k8s-pod-network.e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" Workload="172--232--0--241-k8s-csi--node--driver--h9kj7-eth0" May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.866 [INFO][6197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.870177 containerd[1476]: 2025-05-17 00:31:23.868 [INFO][6190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86" May 17 00:31:23.871598 containerd[1476]: time="2025-05-17T00:31:23.870519987Z" level=info msg="TearDown network for sandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" successfully" May 17 00:31:23.873635 containerd[1476]: time="2025-05-17T00:31:23.873603949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:23.873769 containerd[1476]: time="2025-05-17T00:31:23.873669979Z" level=info msg="RemovePodSandbox \"e852fa0156a6ad2c15eff9db0a0a6d4fde1f81ff0c80a0cbf59bc5e95e627e86\" returns successfully" May 17 00:31:23.874241 containerd[1476]: time="2025-05-17T00:31:23.874121520Z" level=info msg="StopPodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\"" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.899 [WARNING][6211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0", GenerateName:"calico-kube-controllers-96dc47b75-", Namespace:"calico-system", SelfLink:"", UID:"f71e5f0b-7c52-4c28-8833-5eea34a70a67", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96dc47b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738", Pod:"calico-kube-controllers-96dc47b75-xvwdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55893d4ae14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.899 [INFO][6211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.899 [INFO][6211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" iface="eth0" netns="" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.899 [INFO][6211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.899 [INFO][6211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.922 [INFO][6218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.922 [INFO][6218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.922 [INFO][6218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.926 [WARNING][6218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.926 [INFO][6218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.927 [INFO][6218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:23.932806 containerd[1476]: 2025-05-17 00:31:23.929 [INFO][6211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:23.933566 containerd[1476]: time="2025-05-17T00:31:23.933264080Z" level=info msg="TearDown network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" successfully" May 17 00:31:23.933566 containerd[1476]: time="2025-05-17T00:31:23.933322460Z" level=info msg="StopPodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" returns successfully" May 17 00:31:23.933845 containerd[1476]: time="2025-05-17T00:31:23.933806900Z" level=info msg="RemovePodSandbox for \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\"" May 17 00:31:23.933845 containerd[1476]: time="2025-05-17T00:31:23.933842020Z" level=info msg="Forcibly stopping sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\"" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:23.969 [WARNING][6232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0", GenerateName:"calico-kube-controllers-96dc47b75-", Namespace:"calico-system", SelfLink:"", UID:"f71e5f0b-7c52-4c28-8833-5eea34a70a67", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96dc47b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"2db6fd5bc05eab87414552b4c4e505d5a6f18813d4634992c73f4122e73d4738", Pod:"calico-kube-controllers-96dc47b75-xvwdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55893d4ae14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:23.969 [INFO][6232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:23.969 [INFO][6232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" iface="eth0" netns="" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:23.969 [INFO][6232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:23.969 [INFO][6232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.011 [INFO][6239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.013 [INFO][6239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.013 [INFO][6239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.017 [WARNING][6239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.017 [INFO][6239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" HandleID="k8s-pod-network.c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" Workload="172--232--0--241-k8s-calico--kube--controllers--96dc47b75--xvwdn-eth0" May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.019 [INFO][6239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:24.025526 containerd[1476]: 2025-05-17 00:31:24.021 [INFO][6232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b" May 17 00:31:24.025870 containerd[1476]: time="2025-05-17T00:31:24.025595121Z" level=info msg="TearDown network for sandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" successfully" May 17 00:31:24.029451 containerd[1476]: time="2025-05-17T00:31:24.029404203Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:24.029518 containerd[1476]: time="2025-05-17T00:31:24.029477263Z" level=info msg="RemovePodSandbox \"c0c0520291e07242d9d9ef507cea64fa2228ff0780a65a70a67de7c96950ca4b\" returns successfully" May 17 00:31:24.030777 containerd[1476]: time="2025-05-17T00:31:24.030751104Z" level=info msg="StopPodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\"" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.081 [WARNING][6253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb74581d-78ad-4419-8238-b440c64be7cd", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918", Pod:"coredns-7c65d6cfc9-q7n6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0c6e6f1a25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.082 [INFO][6253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.082 [INFO][6253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" iface="eth0" netns="" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.082 [INFO][6253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.082 [INFO][6253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.116 [INFO][6261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.117 [INFO][6261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.117 [INFO][6261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.121 [WARNING][6261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.121 [INFO][6261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.123 [INFO][6261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:24.128337 containerd[1476]: 2025-05-17 00:31:24.125 [INFO][6253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.128337 containerd[1476]: time="2025-05-17T00:31:24.128122676Z" level=info msg="TearDown network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" successfully" May 17 00:31:24.128337 containerd[1476]: time="2025-05-17T00:31:24.128146456Z" level=info msg="StopPodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" returns successfully" May 17 00:31:24.132471 containerd[1476]: time="2025-05-17T00:31:24.132344139Z" level=info msg="RemovePodSandbox for \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\"" May 17 00:31:24.132471 containerd[1476]: time="2025-05-17T00:31:24.132376499Z" level=info msg="Forcibly stopping sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\"" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.161 [WARNING][6275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb74581d-78ad-4419-8238-b440c64be7cd", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 30, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-0-241", ContainerID:"3bea63c492823ca9f94319928bdb55ece135a13b5fe1fd1191557981da832918", Pod:"coredns-7c65d6cfc9-q7n6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0c6e6f1a25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.161 [INFO][6275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.161 [INFO][6275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" iface="eth0" netns="" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.161 [INFO][6275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.161 [INFO][6275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.181 [INFO][6282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.181 [INFO][6282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.181 [INFO][6282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.185 [WARNING][6282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.185 [INFO][6282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" HandleID="k8s-pod-network.d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" Workload="172--232--0--241-k8s-coredns--7c65d6cfc9--q7n6z-eth0" May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.186 [INFO][6282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:24.191257 containerd[1476]: 2025-05-17 00:31:24.187 [INFO][6275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71" May 17 00:31:24.191257 containerd[1476]: time="2025-05-17T00:31:24.190206735Z" level=info msg="TearDown network for sandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" successfully" May 17 00:31:24.193745 containerd[1476]: time="2025-05-17T00:31:24.193709328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:31:24.193781 containerd[1476]: time="2025-05-17T00:31:24.193772088Z" level=info msg="RemovePodSandbox \"d879fb83310468e269cd1b1653fbfaf35f7a309eec7f9644f3e409f69115db71\" returns successfully" May 17 00:31:25.693181 kubelet[2529]: E0517 00:31:25.693081 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:31:31.693651 containerd[1476]: time="2025-05-17T00:31:31.693619952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:31:31.794561 containerd[1476]: time="2025-05-17T00:31:31.794526322Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:31.795265 containerd[1476]: time="2025-05-17T00:31:31.795198813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:31.795331 containerd[1476]: time="2025-05-17T00:31:31.795298213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:31:31.795470 kubelet[2529]: E0517 00:31:31.795405 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:31.795824 kubelet[2529]: E0517 00:31:31.795474 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:31:31.795824 kubelet[2529]: E0517 00:31:31.795567 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be8615eacac5472da34b065a5f473380,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:31.797506 containerd[1476]: time="2025-05-17T00:31:31.797472943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:31:31.889833 containerd[1476]: time="2025-05-17T00:31:31.889790201Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:31.890621 containerd[1476]: time="2025-05-17T00:31:31.890545251Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:31.890621 containerd[1476]: time="2025-05-17T00:31:31.890588221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:31:31.890701 kubelet[2529]: E0517 00:31:31.890673 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:31.890789 kubelet[2529]: E0517 00:31:31.890700 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:31:31.890789 kubelet[2529]: E0517 00:31:31.890761 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:31.892041 kubelet[2529]: E0517 00:31:31.892012 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:31:37.031882 systemd[1]: run-containerd-runc-k8s.io-2ca5c69bc4059e9420a5f5ff12df96d0fa39a50d7a10a2b320a0ed8e6bb8d7d6-runc.EzqlSU.mount: 
Deactivated successfully. May 17 00:31:39.075778 kubelet[2529]: I0517 00:31:39.073879 2529 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:31:39.111584 containerd[1476]: time="2025-05-17T00:31:39.111547944Z" level=info msg="StopContainer for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" with timeout 30 (s)" May 17 00:31:39.111981 containerd[1476]: time="2025-05-17T00:31:39.111946524Z" level=info msg="Stop container \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" with signal terminated" May 17 00:31:39.291201 systemd[1]: cri-containerd-132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362.scope: Deactivated successfully. May 17 00:31:39.324387 containerd[1476]: time="2025-05-17T00:31:39.324130945Z" level=info msg="shim disconnected" id=132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362 namespace=k8s.io May 17 00:31:39.324387 containerd[1476]: time="2025-05-17T00:31:39.324176245Z" level=warning msg="cleaning up after shim disconnected" id=132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362 namespace=k8s.io May 17 00:31:39.324387 containerd[1476]: time="2025-05-17T00:31:39.324183505Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:31:39.325865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362-rootfs.mount: Deactivated successfully. May 17 00:31:39.357030 containerd[1476]: time="2025-05-17T00:31:39.356985213Z" level=info msg="StopContainer for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" returns successfully" May 17 00:31:39.357897 containerd[1476]: time="2025-05-17T00:31:39.357844583Z" level=info msg="StopPodSandbox for \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\"" May 17 00:31:39.358702 containerd[1476]: time="2025-05-17T00:31:39.358664123Z" level=info msg="Container to stop \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:31:39.365562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323-shm.mount: Deactivated successfully. May 17 00:31:39.382737 systemd[1]: cri-containerd-63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323.scope: Deactivated successfully. May 17 00:31:39.404112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323-rootfs.mount: Deactivated successfully. 
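Each PullImage failure above dies at the same step: containerd's resolver first fetches an anonymous bearer token from ghcr.io for the repository scope, ghcr.io answers 403 Forbidden, and the pull aborts before any layer is transferred (hence "active requests=0, bytes read=86"). The token endpoint appears verbatim in the log, so the failure can be reproduced outside the kubelet; a small standard-library-only sketch, with the URL copied from the log:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Same anonymous token request containerd issues before pulling
        // ghcr.io/flatcar/calico/whisker:v3.30.0 (URL taken from the log).
        url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // the log records "403 Forbidden" here
    }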
May 17 00:31:39.405163 containerd[1476]: time="2025-05-17T00:31:39.404690554Z" level=info msg="shim disconnected" id=63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323 namespace=k8s.io May 17 00:31:39.405163 containerd[1476]: time="2025-05-17T00:31:39.405050384Z" level=warning msg="cleaning up after shim disconnected" id=63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323 namespace=k8s.io May 17 00:31:39.405163 containerd[1476]: time="2025-05-17T00:31:39.405061244Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:31:39.463765 systemd-networkd[1399]: cali3cd4f335fc4: Link DOWN May 17 00:31:39.464155 systemd-networkd[1399]: cali3cd4f335fc4: Lost carrier May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.461 [INFO][6395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.461 [INFO][6395] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" iface="eth0" netns="/var/run/netns/cni-ac939d34-22d5-0f7f-341a-e21b635171e9" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.461 [INFO][6395] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" iface="eth0" netns="/var/run/netns/cni-ac939d34-22d5-0f7f-341a-e21b635171e9" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.469 [INFO][6395] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" after=8.009852ms iface="eth0" netns="/var/run/netns/cni-ac939d34-22d5-0f7f-341a-e21b635171e9" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.469 [INFO][6395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.469 [INFO][6395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.490 [INFO][6405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.490 [INFO][6405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.490 [INFO][6405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
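Unlike the no-op teardowns earlier, this StopPodSandbox found a live network namespace: systemd-networkd reports the host-side veth cali3cd4f335fc4 losing carrier as the plugin enters /var/run/netns/cni-ac939d34-22d5-0f7f-341a-e21b635171e9 and deletes the pod-side device, after which the IPAM entries below are genuinely released rather than ignored. A rough Go equivalent of that device deletion via iproute2 (the plugin itself drives netlink directly; this sketch is illustrative only and requires root plus an existing netns):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Rough shell equivalent of what the CNI dataplane does via netlink:
        // enter the sandbox netns and delete the pod-side interface, which
        // drops carrier on the host-side cali* veth peer.
        netns := "cni-ac939d34-22d5-0f7f-341a-e21b635171e9" // from the log
        out, err := exec.Command("ip", "netns", "exec", netns,
            "ip", "link", "del", "eth0").CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("cleanup failed:", err)
        }
    }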
May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.514 [INFO][6405] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.514 [INFO][6405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.515 [INFO][6405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:31:39.519176 containerd[1476]: 2025-05-17 00:31:39.517 [INFO][6395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:31:39.521749 containerd[1476]: time="2025-05-17T00:31:39.521588332Z" level=info msg="TearDown network for sandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" successfully" May 17 00:31:39.521749 containerd[1476]: time="2025-05-17T00:31:39.521613462Z" level=info msg="StopPodSandbox for \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" returns successfully" May 17 00:31:39.522467 systemd[1]: run-netns-cni\x2dac939d34\x2d22d5\x2d0f7f\x2d341a\x2de21b635171e9.mount: Deactivated successfully. May 17 00:31:39.681566 kubelet[2529]: I0517 00:31:39.681054 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-calico-apiserver-certs\") pod \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\" (UID: \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\") " May 17 00:31:39.681566 kubelet[2529]: I0517 00:31:39.681101 2529 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxlkq\" (UniqueName: \"kubernetes.io/projected/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-kube-api-access-jxlkq\") pod \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\" (UID: \"7e0aafb5-c219-4523-9e5b-1fe312a4aa2d\") " May 17 00:31:39.686894 kubelet[2529]: I0517 00:31:39.686648 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-kube-api-access-jxlkq" (OuterVolumeSpecName: "kube-api-access-jxlkq") pod "7e0aafb5-c219-4523-9e5b-1fe312a4aa2d" (UID: "7e0aafb5-c219-4523-9e5b-1fe312a4aa2d"). InnerVolumeSpecName "kube-api-access-jxlkq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:31:39.686894 kubelet[2529]: I0517 00:31:39.686808 2529 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7e0aafb5-c219-4523-9e5b-1fe312a4aa2d" (UID: "7e0aafb5-c219-4523-9e5b-1fe312a4aa2d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:31:39.688957 systemd[1]: var-lib-kubelet-pods-7e0aafb5\x2dc219\x2d4523\x2d9e5b\x2d1fe312a4aa2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djxlkq.mount: Deactivated successfully. 
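The reconciler's UnmountVolume/TearDown entries pair one-to-one with the systemd mount units deactivated here and in the next entry. The unit names are systemd-escaped paths: "-" encodes "/", and literal bytes appear as \xNN, so \x2d is "-" and \x7e is "~". A small decoder under that assumption (equivalent to systemd-escape --unescape):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnit reverses systemd path escaping: "-" -> "/", "\xNN" -> byte.
    func unescapeUnit(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        b.WriteByte('/') // mount unit names drop the leading slash
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                b.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                if n, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(n))
                    i += 3
                    continue
                }
                b.WriteByte(name[i])
            default:
                b.WriteByte(name[i])
            }
        }
        return b.String()
    }

    func main() {
        // Unit name from the journal entry above; prints
        // /var/lib/kubelet/pods/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d/volumes/kubernetes.io~projected/kube-api-access-jxlkq
        fmt.Println(unescapeUnit(`var-lib-kubelet-pods-7e0aafb5\x2dc219\x2d4523\x2d9e5b\x2d1fe312a4aa2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djxlkq.mount`))
    }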
May 17 00:31:39.689069 systemd[1]: var-lib-kubelet-pods-7e0aafb5\x2dc219\x2d4523\x2d9e5b\x2d1fe312a4aa2d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 17 00:31:39.694139 containerd[1476]: time="2025-05-17T00:31:39.693602354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:31:39.781767 kubelet[2529]: I0517 00:31:39.781735 2529 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxlkq\" (UniqueName: \"kubernetes.io/projected/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-kube-api-access-jxlkq\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:31:39.781767 kubelet[2529]: I0517 00:31:39.781759 2529 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d-calico-apiserver-certs\") on node \"172-232-0-241\" DevicePath \"\"" May 17 00:31:39.812756 containerd[1476]: time="2025-05-17T00:31:39.812526992Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:31:39.813688 containerd[1476]: time="2025-05-17T00:31:39.813651852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:31:39.813760 containerd[1476]: time="2025-05-17T00:31:39.813725123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:31:39.813922 kubelet[2529]: E0517 00:31:39.813883 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:39.813992 kubelet[2529]: E0517 00:31:39.813952 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:31:39.814415 kubelet[2529]: E0517 00:31:39.814213 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnr67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-s52mw_calico-system(ee80876b-aa39-4375-a4e1-fd4e85f8d3ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:31:39.815673 kubelet[2529]: E0517 00:31:39.815591 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:31:40.051267 kubelet[2529]: I0517 00:31:40.051037 2529 scope.go:117] "RemoveContainer" containerID="132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362" May 17 00:31:40.053303 containerd[1476]: time="2025-05-17T00:31:40.052485089Z" level=info msg="RemoveContainer for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\"" May 17 00:31:40.055664 containerd[1476]: time="2025-05-17T00:31:40.055637920Z" level=info msg="RemoveContainer for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" returns successfully" May 17 00:31:40.055794 kubelet[2529]: I0517 00:31:40.055779 2529 scope.go:117] "RemoveContainer" containerID="132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362" May 17 00:31:40.056049 containerd[1476]: time="2025-05-17T00:31:40.055985710Z" level=error msg="ContainerStatus for \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\": not found" May 17 00:31:40.056154 kubelet[2529]: E0517 00:31:40.056130 2529 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\": not found" containerID="132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362" May 17 00:31:40.056154 kubelet[2529]: I0517 00:31:40.056151 2529 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362"} err="failed to get container status \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\": rpc error: code = NotFound desc = an error occurred when try to find container \"132ad095d234b6a521180295f049f43fbe4ae9c298f193433f3981143619b362\": not found" May 17 00:31:40.057756 systemd[1]: Removed slice kubepods-besteffort-pod7e0aafb5_c219_4523_9e5b_1fe312a4aa2d.slice - libcontainer container kubepods-besteffort-pod7e0aafb5_c219_4523_9e5b_1fe312a4aa2d.slice. May 17 00:31:40.661195 systemd[1]: run-containerd-runc-k8s.io-02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd-runc.pDd522.mount: Deactivated successfully. 
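From here the failed pulls settle into the steady state visible below: each ErrImagePull flips the containers into ImagePullBackOff, and kubelet retries on a doubling delay, which matches the lengthening gap between the whisker pull attempts (00:31:31, then 00:32:12). A sketch of that schedule using kubelet's usual defaults of a 10s initial delay doubling to a 5-minute cap (the 10s/300s figures are assumed defaults, not read from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: image pull back-off starts at 10s,
        // doubles on each consecutive failure, and is capped at 5 minutes.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failure %d: next pull retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }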
May 17 00:31:40.697159 kubelet[2529]: I0517 00:31:40.697112 2529 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0aafb5-c219-4523-9e5b-1fe312a4aa2d" path="/var/lib/kubelet/pods/7e0aafb5-c219-4523-9e5b-1fe312a4aa2d/volumes" May 17 00:31:46.694884 kubelet[2529]: E0517 00:31:46.693861 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:31:46.696662 kubelet[2529]: E0517 00:31:46.696102 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:31:52.694394 kubelet[2529]: E0517 00:31:52.693855 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:31:58.692526 kubelet[2529]: E0517 00:31:58.692164 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:31:59.693546 kubelet[2529]: E0517 00:31:59.693510 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:32:04.692210 kubelet[2529]: E0517 00:32:04.692169 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:32:05.692527 kubelet[2529]: E0517 00:32:05.692395 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:32:05.693399 kubelet[2529]: E0517 00:32:05.693374 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:32:09.693106 kubelet[2529]: E0517 00:32:09.692792 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:32:09.693643 kubelet[2529]: E0517 00:32:09.693050 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:32:10.657931 systemd[1]: 
run-containerd-runc-k8s.io-02493e37174686715eb0023272e65a12d0d9652b55bf8af5e5bb0dee61b5cfcd-runc.2xQr81.mount: Deactivated successfully. May 17 00:32:12.693930 containerd[1476]: time="2025-05-17T00:32:12.693709856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:32:12.791614 containerd[1476]: time="2025-05-17T00:32:12.791569702Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:12.792657 containerd[1476]: time="2025-05-17T00:32:12.792611099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:12.792726 containerd[1476]: time="2025-05-17T00:32:12.792659891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:32:12.792818 kubelet[2529]: E0517 00:32:12.792784 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:32:12.793310 kubelet[2529]: E0517 00:32:12.792838 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:32:12.793310 kubelet[2529]: E0517 00:32:12.792930 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be8615eacac5472da34b065a5f473380,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:12.794746 containerd[1476]: time="2025-05-17T00:32:12.794709205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:32:12.887730 containerd[1476]: time="2025-05-17T00:32:12.887679494Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:12.888886 containerd[1476]: time="2025-05-17T00:32:12.888804075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:12.888886 containerd[1476]: time="2025-05-17T00:32:12.888855407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:32:12.889076 kubelet[2529]: E0517 00:32:12.889034 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:32:12.889159 kubelet[2529]: E0517 00:32:12.889089 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:32:12.889243 kubelet[2529]: E0517 00:32:12.889204 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767b6d8985-vppnt_calico-system(a77cac63-6e4c-448a-ad97-4b194bdcbe50): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:12.890563 kubelet[2529]: E0517 00:32:12.890512 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:32:20.693450 containerd[1476]: time="2025-05-17T00:32:20.693234889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:32:20.790553 containerd[1476]: time="2025-05-17T00:32:20.790484027Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:32:20.791572 containerd[1476]: time="2025-05-17T00:32:20.791536478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:32:20.791665 containerd[1476]: time="2025-05-17T00:32:20.791604120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:32:20.791827 kubelet[2529]: E0517 00:32:20.791768 2529 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:32:20.792489 kubelet[2529]: E0517 00:32:20.791877 2529 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:32:20.792489 kubelet[2529]: E0517 00:32:20.792040 2529 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnr67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-s52mw_calico-system(ee80876b-aa39-4375-a4e1-fd4e85f8d3ee): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:32:20.793567 kubelet[2529]: E0517 00:32:20.793478 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:32:24.196736 containerd[1476]: time="2025-05-17T00:32:24.196654557Z" level=info msg="StopPodSandbox for \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\"" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.222 [WARNING][6527] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.222 [INFO][6527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.222 [INFO][6527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" iface="eth0" netns="" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.222 [INFO][6527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.222 [INFO][6527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.236 [INFO][6535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.236 [INFO][6535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.236 [INFO][6535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.240 [WARNING][6535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.240 [INFO][6535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.241 [INFO][6535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:32:24.244520 containerd[1476]: 2025-05-17 00:32:24.242 [INFO][6527] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.244843 containerd[1476]: time="2025-05-17T00:32:24.244560675Z" level=info msg="TearDown network for sandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" successfully" May 17 00:32:24.244843 containerd[1476]: time="2025-05-17T00:32:24.244583506Z" level=info msg="StopPodSandbox for \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" returns successfully" May 17 00:32:24.245076 containerd[1476]: time="2025-05-17T00:32:24.245058928Z" level=info msg="RemovePodSandbox for \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\"" May 17 00:32:24.245116 containerd[1476]: time="2025-05-17T00:32:24.245084419Z" level=info msg="Forcibly stopping sandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\"" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.268 [WARNING][6549] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" WorkloadEndpoint="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.268 [INFO][6549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.268 [INFO][6549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" iface="eth0" netns="" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.268 [INFO][6549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.268 [INFO][6549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.284 [INFO][6556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.284 [INFO][6556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.284 [INFO][6556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.287 [WARNING][6556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.287 [INFO][6556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" HandleID="k8s-pod-network.63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" Workload="172--232--0--241-k8s-calico--apiserver--7cf648ccbb--wj8jt-eth0" May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.288 [INFO][6556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:32:24.291949 containerd[1476]: 2025-05-17 00:32:24.290 [INFO][6549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323" May 17 00:32:24.292260 containerd[1476]: time="2025-05-17T00:32:24.291980229Z" level=info msg="TearDown network for sandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" successfully" May 17 00:32:24.295411 containerd[1476]: time="2025-05-17T00:32:24.295390172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:32:24.296016 containerd[1476]: time="2025-05-17T00:32:24.295450913Z" level=info msg="RemovePodSandbox \"63653d713db3bccd4e52d8260dd52644024b90a2318394c1a0d2461043756323\" returns successfully" May 17 00:32:27.693237 kubelet[2529]: E0517 00:32:27.693121 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:32:31.693946 kubelet[2529]: E0517 00:32:31.693778 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:32:37.692759 kubelet[2529]: E0517 00:32:37.692732 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:32:38.694356 kubelet[2529]: E0517 00:32:38.693766 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:32:40.435241 systemd[1]: Started sshd@7-172.232.0.241:22-139.178.89.65:60306.service - OpenSSH per-connection server daemon (139.178.89.65:60306). 
May 17 00:32:40.764736 sshd[6594]: Accepted publickey for core from 139.178.89.65 port 60306 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:32:40.767096 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:32:40.771310 systemd-logind[1455]: New session 8 of user core. May 17 00:32:40.777535 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:32:41.061135 sshd[6594]: pam_unix(sshd:session): session closed for user core May 17 00:32:41.064997 systemd[1]: sshd@7-172.232.0.241:22-139.178.89.65:60306.service: Deactivated successfully. May 17 00:32:41.066533 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:32:41.067037 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. May 17 00:32:41.067972 systemd-logind[1455]: Removed session 8. May 17 00:32:43.693351 kubelet[2529]: E0517 00:32:43.693201 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:32:46.121614 systemd[1]: Started sshd@8-172.232.0.241:22-139.178.89.65:60322.service - OpenSSH per-connection server daemon (139.178.89.65:60322). May 17 00:32:46.449597 sshd[6651]: Accepted publickey for core from 139.178.89.65 port 60322 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:32:46.451977 sshd[6651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:32:46.458186 systemd-logind[1455]: New session 9 of user core. May 17 00:32:46.462555 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:32:46.733847 sshd[6651]: pam_unix(sshd:session): session closed for user core May 17 00:32:46.737137 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. May 17 00:32:46.737452 systemd[1]: sshd@8-172.232.0.241:22-139.178.89.65:60322.service: Deactivated successfully. May 17 00:32:46.738877 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:32:46.739643 systemd-logind[1455]: Removed session 9. May 17 00:32:46.796166 systemd[1]: Started sshd@9-172.232.0.241:22-139.178.89.65:36894.service - OpenSSH per-connection server daemon (139.178.89.65:36894). May 17 00:32:47.119606 sshd[6665]: Accepted publickey for core from 139.178.89.65 port 36894 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:32:47.121350 sshd[6665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:32:47.126168 systemd-logind[1455]: New session 10 of user core. May 17 00:32:47.130543 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:32:47.440664 sshd[6665]: pam_unix(sshd:session): session closed for user core May 17 00:32:47.445769 systemd[1]: sshd@9-172.232.0.241:22-139.178.89.65:36894.service: Deactivated successfully. May 17 00:32:47.446162 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. May 17 00:32:47.450069 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:32:47.453471 systemd-logind[1455]: Removed session 10. May 17 00:32:47.507592 systemd[1]: Started sshd@10-172.232.0.241:22-139.178.89.65:36902.service - OpenSSH per-connection server daemon (139.178.89.65:36902). 
May 17 00:32:47.693630 kubelet[2529]: E0517 00:32:47.692342 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:32:47.829077 sshd[6676]: Accepted publickey for core from 139.178.89.65 port 36902 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:32:47.828878 sshd[6676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:32:47.837500 systemd-logind[1455]: New session 11 of user core. May 17 00:32:47.843461 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:32:48.148724 sshd[6676]: pam_unix(sshd:session): session closed for user core May 17 00:32:48.152164 systemd[1]: sshd@10-172.232.0.241:22-139.178.89.65:36902.service: Deactivated successfully. May 17 00:32:48.154029 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:32:48.155074 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. May 17 00:32:48.155954 systemd-logind[1455]: Removed session 11. May 17 00:32:52.694760 kubelet[2529]: E0517 00:32:52.694531 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:32:53.214671 systemd[1]: Started sshd@11-172.232.0.241:22-139.178.89.65:36904.service - OpenSSH per-connection server daemon (139.178.89.65:36904). May 17 00:32:53.532361 sshd[6694]: Accepted publickey for core from 139.178.89.65 port 36904 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:32:53.534271 sshd[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:32:53.539256 systemd-logind[1455]: New session 12 of user core. May 17 00:32:53.542580 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:32:53.838166 sshd[6694]: pam_unix(sshd:session): session closed for user core May 17 00:32:53.842698 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. May 17 00:32:53.843816 systemd[1]: sshd@11-172.232.0.241:22-139.178.89.65:36904.service: Deactivated successfully. May 17 00:32:53.846196 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:32:53.847051 systemd-logind[1455]: Removed session 12. May 17 00:32:54.693483 kubelet[2529]: E0517 00:32:54.693251 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:32:58.906413 systemd[1]: Started sshd@12-172.232.0.241:22-139.178.89.65:42980.service - OpenSSH per-connection server daemon (139.178.89.65:42980). May 17 00:32:59.250461 sshd[6726]: Accepted publickey for core from 139.178.89.65 port 42980 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:32:59.252189 sshd[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:32:59.256489 systemd-logind[1455]: New session 13 of user core. 
May 17 00:32:59.261568 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:32:59.560572 sshd[6726]: pam_unix(sshd:session): session closed for user core May 17 00:32:59.563438 systemd[1]: sshd@12-172.232.0.241:22-139.178.89.65:42980.service: Deactivated successfully. May 17 00:32:59.565198 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:32:59.566695 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. May 17 00:32:59.567767 systemd-logind[1455]: Removed session 13. May 17 00:33:03.693865 kubelet[2529]: E0517 00:33:03.693737 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:33:04.617615 systemd[1]: Started sshd@13-172.232.0.241:22-139.178.89.65:42988.service - OpenSSH per-connection server daemon (139.178.89.65:42988). May 17 00:33:04.935378 sshd[6743]: Accepted publickey for core from 139.178.89.65 port 42988 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:04.937049 sshd[6743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:04.940319 systemd-logind[1455]: New session 14 of user core. May 17 00:33:04.943542 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:33:05.224891 sshd[6743]: pam_unix(sshd:session): session closed for user core May 17 00:33:05.227645 systemd[1]: sshd@13-172.232.0.241:22-139.178.89.65:42988.service: Deactivated successfully. May 17 00:33:05.229303 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:33:05.230307 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. May 17 00:33:05.231278 systemd-logind[1455]: Removed session 14. May 17 00:33:05.282987 systemd[1]: Started sshd@14-172.232.0.241:22-139.178.89.65:42994.service - OpenSSH per-connection server daemon (139.178.89.65:42994). May 17 00:33:05.600277 sshd[6756]: Accepted publickey for core from 139.178.89.65 port 42994 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:05.601749 sshd[6756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:05.605236 systemd-logind[1455]: New session 15 of user core. May 17 00:33:05.612532 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:33:05.692488 kubelet[2529]: E0517 00:33:05.692446 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:33:06.005295 sshd[6756]: pam_unix(sshd:session): session closed for user core May 17 00:33:06.007723 systemd[1]: sshd@14-172.232.0.241:22-139.178.89.65:42994.service: Deactivated successfully. May 17 00:33:06.009481 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:33:06.010580 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. May 17 00:33:06.011469 systemd-logind[1455]: Removed session 15. 
May 17 00:33:06.069133 systemd[1]: Started sshd@15-172.232.0.241:22-139.178.89.65:42996.service - OpenSSH per-connection server daemon (139.178.89.65:42996). May 17 00:33:06.397778 sshd[6767]: Accepted publickey for core from 139.178.89.65 port 42996 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:06.399562 sshd[6767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:06.403981 systemd-logind[1455]: New session 16 of user core. May 17 00:33:06.406534 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:33:08.006786 sshd[6767]: pam_unix(sshd:session): session closed for user core May 17 00:33:08.010516 systemd[1]: sshd@15-172.232.0.241:22-139.178.89.65:42996.service: Deactivated successfully. May 17 00:33:08.012247 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:33:08.012821 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. May 17 00:33:08.013630 systemd-logind[1455]: Removed session 16. May 17 00:33:08.068224 systemd[1]: Started sshd@16-172.232.0.241:22-139.178.89.65:33740.service - OpenSSH per-connection server daemon (139.178.89.65:33740). May 17 00:33:08.408405 sshd[6804]: Accepted publickey for core from 139.178.89.65 port 33740 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:08.412102 sshd[6804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:08.415851 systemd-logind[1455]: New session 17 of user core. May 17 00:33:08.420530 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:33:08.777724 sshd[6804]: pam_unix(sshd:session): session closed for user core May 17 00:33:08.780834 systemd[1]: sshd@16-172.232.0.241:22-139.178.89.65:33740.service: Deactivated successfully. May 17 00:33:08.782684 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:33:08.783837 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. May 17 00:33:08.784922 systemd-logind[1455]: Removed session 17. May 17 00:33:08.833142 systemd[1]: Started sshd@17-172.232.0.241:22-139.178.89.65:33750.service - OpenSSH per-connection server daemon (139.178.89.65:33750). May 17 00:33:09.146628 sshd[6815]: Accepted publickey for core from 139.178.89.65 port 33750 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:09.148485 sshd[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:09.152996 systemd-logind[1455]: New session 18 of user core. May 17 00:33:09.157536 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:33:09.429365 sshd[6815]: pam_unix(sshd:session): session closed for user core May 17 00:33:09.432580 systemd[1]: sshd@17-172.232.0.241:22-139.178.89.65:33750.service: Deactivated successfully. May 17 00:33:09.434602 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:33:09.436626 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. May 17 00:33:09.438112 systemd-logind[1455]: Removed session 18. May 17 00:33:11.692374 kubelet[2529]: E0517 00:33:11.692328 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:33:14.489587 systemd[1]: Started sshd@18-172.232.0.241:22-139.178.89.65:33754.service - OpenSSH per-connection server daemon (139.178.89.65:33754). 
May 17 00:33:14.814519 sshd[6852]: Accepted publickey for core from 139.178.89.65 port 33754 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:14.816030 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:14.820524 systemd-logind[1455]: New session 19 of user core. May 17 00:33:14.823562 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:33:15.115675 sshd[6852]: pam_unix(sshd:session): session closed for user core May 17 00:33:15.118536 systemd[1]: sshd@18-172.232.0.241:22-139.178.89.65:33754.service: Deactivated successfully. May 17 00:33:15.120553 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:33:15.121636 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. May 17 00:33:15.122551 systemd-logind[1455]: Removed session 19. May 17 00:33:17.693268 kubelet[2529]: E0517 00:33:17.693007 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee" May 17 00:33:17.693692 kubelet[2529]: E0517 00:33:17.693363 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-767b6d8985-vppnt" podUID="a77cac63-6e4c-448a-ad97-4b194bdcbe50" May 17 00:33:19.692619 kubelet[2529]: E0517 00:33:19.692586 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:33:20.190862 systemd[1]: Started sshd@19-172.232.0.241:22-139.178.89.65:39164.service - OpenSSH per-connection server daemon (139.178.89.65:39164). May 17 00:33:20.522796 sshd[6865]: Accepted publickey for core from 139.178.89.65 port 39164 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:20.524359 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:20.527990 systemd-logind[1455]: New session 20 of user core. May 17 00:33:20.533550 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:33:20.693455 kubelet[2529]: E0517 00:33:20.692954 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:33:20.818923 sshd[6865]: pam_unix(sshd:session): session closed for user core May 17 00:33:20.823509 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. May 17 00:33:20.824125 systemd[1]: sshd@19-172.232.0.241:22-139.178.89.65:39164.service: Deactivated successfully. May 17 00:33:20.827071 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:33:20.828077 systemd-logind[1455]: Removed session 20. May 17 00:33:25.884140 systemd[1]: Started sshd@20-172.232.0.241:22-139.178.89.65:39168.service - OpenSSH per-connection server daemon (139.178.89.65:39168). 
May 17 00:33:26.204118 sshd[6880]: Accepted publickey for core from 139.178.89.65 port 39168 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:33:26.207813 sshd[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:33:26.213693 systemd-logind[1455]: New session 21 of user core. May 17 00:33:26.218831 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:33:26.514052 sshd[6880]: pam_unix(sshd:session): session closed for user core May 17 00:33:26.519725 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. May 17 00:33:26.521289 systemd[1]: sshd@20-172.232.0.241:22-139.178.89.65:39168.service: Deactivated successfully. May 17 00:33:26.526495 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:33:26.528052 systemd-logind[1455]: Removed session 21. May 17 00:33:26.696467 kubelet[2529]: E0517 00:33:26.695883 2529 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 17 00:33:28.694290 kubelet[2529]: E0517 00:33:28.694001 2529 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-s52mw" podUID="ee80876b-aa39-4375-a4e1-fd4e85f8d3ee"