May 17 00:20:43.879891 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:20:43.879911 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:20:43.879919 kernel: BIOS-provided physical RAM map:
May 17 00:20:43.879925 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 17 00:20:43.879930 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 17 00:20:43.879938 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:20:43.879944 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 17 00:20:43.879949 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 17 00:20:43.879955 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:20:43.879960 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:20:43.879966 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:20:43.879971 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:20:43.879976 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 17 00:20:43.879984 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:20:43.879991 kernel: NX (Execute Disable) protection: active
May 17 00:20:43.879996 kernel: APIC: Static calls initialized
May 17 00:20:43.880002 kernel: SMBIOS 2.8 present.
May 17 00:20:43.880008 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 17 00:20:43.880013 kernel: Hypervisor detected: KVM
May 17 00:20:43.880021 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:20:43.880027 kernel: kvm-clock: using sched offset of 4616673316 cycles
May 17 00:20:43.880032 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:20:43.880038 kernel: tsc: Detected 2000.000 MHz processor
May 17 00:20:43.880045 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:20:43.880051 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:20:43.880057 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 17 00:20:43.880063 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:20:43.880069 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:20:43.880077 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 17 00:20:43.880082 kernel: Using GB pages for direct mapping
May 17 00:20:43.880088 kernel: ACPI: Early table checksum verification disabled
May 17 00:20:43.880094 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 17 00:20:43.880100 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880106 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880111 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880117 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:20:43.880123 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880131 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880137 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880143 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:20:43.880152 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 17 00:20:43.880158 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 17 00:20:43.882437 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:20:43.882453 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 17 00:20:43.882462 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 17 00:20:43.882469 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 17 00:20:43.882475 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 17 00:20:43.882481 kernel: No NUMA configuration found
May 17 00:20:43.882488 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 17 00:20:43.882494 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
May 17 00:20:43.882500 kernel: Zone ranges:
May 17 00:20:43.882520 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:20:43.882527 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:20:43.882533 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 17 00:20:43.882539 kernel: Movable zone start for each node
May 17 00:20:43.882545 kernel: Early memory node ranges
May 17 00:20:43.882551 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:20:43.882557 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 17 00:20:43.882563 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 17 00:20:43.882569 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 17 00:20:43.882576 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:20:43.882584 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:20:43.882590 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 17 00:20:43.882596 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:20:43.882602 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:20:43.882608 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:20:43.882615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:20:43.882621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:20:43.882627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:20:43.882633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:20:43.882641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:20:43.882648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:20:43.882654 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:20:43.882660 kernel: TSC deadline timer available
May 17 00:20:43.882666 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:20:43.882672 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:20:43.882678 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:20:43.882684 kernel: kvm-guest: setup PV sched yield
May 17 00:20:43.882690 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:20:43.882699 kernel: Booting paravirtualized kernel on KVM
May 17 00:20:43.882706 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:20:43.882712 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:20:43.882718 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:20:43.882724 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:20:43.882730 kernel: pcpu-alloc: [0] 0 1
May 17 00:20:43.882736 kernel: kvm-guest: PV spinlocks enabled
May 17 00:20:43.882743 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:20:43.882750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:20:43.882759 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:20:43.882765 kernel: random: crng init done
May 17 00:20:43.882771 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:20:43.882777 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:20:43.882783 kernel: Fallback order for Node 0: 0
May 17 00:20:43.882789 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 17 00:20:43.882795 kernel: Policy zone: Normal
May 17 00:20:43.882801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:20:43.882809 kernel: software IO TLB: area num 2.
May 17 00:20:43.882816 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved)
May 17 00:20:43.882822 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:20:43.882828 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:20:43.882834 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:20:43.882840 kernel: Dynamic Preempt: voluntary
May 17 00:20:43.882847 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:20:43.882853 kernel: rcu: RCU event tracing is enabled.
May 17 00:20:43.882860 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:20:43.882868 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:20:43.882875 kernel: Rude variant of Tasks RCU enabled.
May 17 00:20:43.882881 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:20:43.882887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:20:43.882893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:20:43.882899 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:20:43.882905 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:20:43.882911 kernel: Console: colour VGA+ 80x25
May 17 00:20:43.882917 kernel: printk: console [tty0] enabled
May 17 00:20:43.882926 kernel: printk: console [ttyS0] enabled
May 17 00:20:43.882932 kernel: ACPI: Core revision 20230628
May 17 00:20:43.882938 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:20:43.882944 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:20:43.882958 kernel: x2apic enabled
May 17 00:20:43.882966 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:20:43.882973 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 00:20:43.882979 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 00:20:43.882986 kernel: kvm-guest: setup PV IPIs
May 17 00:20:43.882992 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:20:43.882999 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:20:43.883005 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 17 00:20:43.883014 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:20:43.883020 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:20:43.883027 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:20:43.883033 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:20:43.883040 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:20:43.883048 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:20:43.883055 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:20:43.883061 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:20:43.883068 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:20:43.883075 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 17 00:20:43.883082 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 00:20:43.883088 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 00:20:43.883095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:20:43.883103 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:20:43.883110 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:20:43.883116 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 17 00:20:43.883122 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:20:43.883129 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 17 00:20:43.883135 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 17 00:20:43.883142 kernel: Freeing SMP alternatives memory: 32K
May 17 00:20:43.883148 kernel: pid_max: default: 32768 minimum: 301
May 17 00:20:43.883155 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:20:43.885188 kernel: landlock: Up and running.
May 17 00:20:43.885197 kernel: SELinux: Initializing.
May 17 00:20:43.885204 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:20:43.885211 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:20:43.885218 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 17 00:20:43.885224 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:20:43.885231 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:20:43.885237 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:20:43.885244 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:20:43.885254 kernel: ... version: 0
May 17 00:20:43.885260 kernel: ... bit width: 48
May 17 00:20:43.885267 kernel: ... generic registers: 6
May 17 00:20:43.885273 kernel: ... value mask: 0000ffffffffffff
May 17 00:20:43.885279 kernel: ... max period: 00007fffffffffff
May 17 00:20:43.885286 kernel: ... fixed-purpose events: 0
May 17 00:20:43.885292 kernel: ... event mask: 000000000000003f
May 17 00:20:43.885298 kernel: signal: max sigframe size: 3376
May 17 00:20:43.885305 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:20:43.885314 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:20:43.885320 kernel: smp: Bringing up secondary CPUs ...
May 17 00:20:43.885327 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:20:43.885333 kernel: .... node #0, CPUs: #1
May 17 00:20:43.885339 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:20:43.885346 kernel: smpboot: Max logical packages: 1
May 17 00:20:43.885352 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 17 00:20:43.885358 kernel: devtmpfs: initialized
May 17 00:20:43.885365 kernel: x86/mm: Memory block size: 128MB
May 17 00:20:43.885373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:20:43.885380 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:20:43.885386 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:20:43.885393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:20:43.885399 kernel: audit: initializing netlink subsys (disabled)
May 17 00:20:43.885406 kernel: audit: type=2000 audit(1747441243.410:1): state=initialized audit_enabled=0 res=1
May 17 00:20:43.885412 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:20:43.885418 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:20:43.885425 kernel: cpuidle: using governor menu
May 17 00:20:43.885433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:20:43.885440 kernel: dca service started, version 1.12.1
May 17 00:20:43.885446 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:20:43.885453 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 17 00:20:43.885459 kernel: PCI: Using configuration type 1 for base access
May 17 00:20:43.885466 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:20:43.885472 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:20:43.885479 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:20:43.885485 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:20:43.885494 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:20:43.885500 kernel: ACPI: Added _OSI(Module Device)
May 17 00:20:43.885507 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:20:43.885513 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:20:43.885519 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:20:43.885526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:20:43.885532 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:20:43.885538 kernel: ACPI: Interpreter enabled
May 17 00:20:43.885545 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:20:43.885553 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:20:43.885560 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:20:43.885566 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:20:43.885572 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:20:43.885579 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:20:43.885752 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:20:43.885876 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:20:43.885992 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:20:43.886002 kernel: PCI host bridge to bus 0000:00
May 17 00:20:43.886117 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:20:43.886260 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:20:43.886364 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:20:43.886464 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 17 00:20:43.886564 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:20:43.886664 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 17 00:20:43.886771 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:20:43.886906 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:20:43.887028 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:20:43.887139 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:20:43.887276 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:20:43.887388 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:20:43.887502 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:20:43.887621 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
May 17 00:20:43.887733 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
May 17 00:20:43.887843 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:20:43.887952 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:20:43.888071 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:20:43.891241 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 17 00:20:43.891376 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:20:43.891489 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:20:43.891601 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:20:43.891719 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:20:43.891845 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:20:43.891966 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:20:43.892083 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
May 17 00:20:43.892209 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
May 17 00:20:43.892329 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:20:43.892437 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:20:43.892447 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:20:43.892454 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:20:43.892461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:20:43.892467 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:20:43.892477 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:20:43.892484 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:20:43.892490 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:20:43.892497 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:20:43.892503 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:20:43.892510 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:20:43.892516 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:20:43.892522 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:20:43.892529 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:20:43.892538 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:20:43.892544 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:20:43.892551 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:20:43.892557 kernel: iommu: Default domain type: Translated
May 17 00:20:43.892564 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:20:43.892570 kernel: PCI: Using ACPI for IRQ routing
May 17 00:20:43.892576 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:20:43.892583 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 17 00:20:43.892589 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 17 00:20:43.892698 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:20:43.892806 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:20:43.892913 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:20:43.892922 kernel: vgaarb: loaded
May 17 00:20:43.892929 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:20:43.892935 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:20:43.892942 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:20:43.892948 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:20:43.892958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:20:43.892964 kernel: pnp: PnP ACPI init
May 17 00:20:43.893088 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:20:43.893098 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:20:43.893105 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:20:43.893112 kernel: NET: Registered PF_INET protocol family
May 17 00:20:43.893118 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:20:43.893125 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:20:43.893134 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:20:43.893141 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:20:43.893147 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:20:43.893154 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:20:43.895554 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:20:43.895563 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:20:43.895570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:20:43.895576 kernel: NET: Registered PF_XDP protocol family
May 17 00:20:43.895690 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:20:43.895799 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:20:43.895899 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:20:43.896000 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 17 00:20:43.896099 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:20:43.896229 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 17 00:20:43.896240 kernel: PCI: CLS 0 bytes, default 64
May 17 00:20:43.896247 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:20:43.896254 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 17 00:20:43.896264 kernel: Initialise system trusted keyrings
May 17 00:20:43.896271 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:20:43.896278 kernel: Key type asymmetric registered
May 17 00:20:43.896284 kernel: Asymmetric key parser 'x509' registered
May 17 00:20:43.896290 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:20:43.896297 kernel: io scheduler mq-deadline registered
May 17 00:20:43.896303 kernel: io scheduler kyber registered
May 17 00:20:43.896310 kernel: io scheduler bfq registered
May 17 00:20:43.896316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:20:43.896324 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:20:43.896333 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:20:43.896339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:20:43.896345 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:20:43.896352 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:20:43.896359 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:20:43.896365 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:20:43.896371 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:20:43.896486 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:20:43.896596 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:20:43.896706 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:20:43 UTC (1747441243)
May 17 00:20:43.896807 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:20:43.896816 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:20:43.896823 kernel: NET: Registered PF_INET6 protocol family
May 17 00:20:43.896829 kernel: Segment Routing with IPv6
May 17 00:20:43.896836 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:20:43.896842 kernel: NET: Registered PF_PACKET protocol family
May 17 00:20:43.896852 kernel: Key type dns_resolver registered
May 17 00:20:43.896858 kernel: IPI shorthand broadcast: enabled
May 17 00:20:43.896865 kernel: sched_clock: Marking stable (683003570, 207176012)->(949053935, -58874353)
May 17 00:20:43.896871 kernel: registered taskstats version 1
May 17 00:20:43.896878 kernel: Loading compiled-in X.509 certificates
May 17 00:20:43.896884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:20:43.896890 kernel: Key type .fscrypt registered
May 17 00:20:43.896897 kernel: Key type fscrypt-provisioning registered
May 17 00:20:43.896903 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:20:43.896912 kernel: ima: Allocated hash algorithm: sha1
May 17 00:20:43.896918 kernel: ima: No architecture policies found
May 17 00:20:43.896925 kernel: clk: Disabling unused clocks
May 17 00:20:43.896931 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:20:43.896938 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:20:43.896944 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:20:43.896950 kernel: Run /init as init process
May 17 00:20:43.896957 kernel: with arguments:
May 17 00:20:43.896963 kernel: /init
May 17 00:20:43.896972 kernel: with environment:
May 17 00:20:43.896978 kernel: HOME=/
May 17 00:20:43.896985 kernel: TERM=linux
May 17 00:20:43.896991 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:20:43.896999 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:20:43.897008 systemd[1]: Detected virtualization kvm.
May 17 00:20:43.897015 systemd[1]: Detected architecture x86-64.
May 17 00:20:43.897021 systemd[1]: Running in initrd.
May 17 00:20:43.897030 systemd[1]: No hostname configured, using default hostname.
May 17 00:20:43.897037 systemd[1]: Hostname set to .
May 17 00:20:43.897044 systemd[1]: Initializing machine ID from random generator.
May 17 00:20:43.897051 systemd[1]: Queued start job for default target initrd.target.
May 17 00:20:43.897058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:20:43.897078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:20:43.897090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:20:43.897097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:20:43.897104 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:20:43.897111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:20:43.897120 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:20:43.897127 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:20:43.897137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:20:43.897144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:20:43.897151 systemd[1]: Reached target paths.target - Path Units.
May 17 00:20:43.897158 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:20:43.898220 systemd[1]: Reached target swap.target - Swaps.
May 17 00:20:43.898229 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:20:43.898236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:20:43.898244 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:20:43.898251 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:20:43.898262 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:20:43.898269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:20:43.898276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:20:43.898283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:20:43.898290 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:20:43.898297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:20:43.898304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:20:43.898311 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:20:43.898318 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:20:43.898328 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:20:43.898335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:20:43.898362 systemd-journald[177]: Collecting audit messages is disabled.
May 17 00:20:43.898379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:43.898390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:20:43.898397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:20:43.898407 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:20:43.898418 systemd-journald[177]: Journal started
May 17 00:20:43.898433 systemd-journald[177]: Runtime Journal (/run/log/journal/93883c70eba4463a940dbe45a886250a) is 8.0M, max 78.3M, 70.3M free.
May 17 00:20:43.897646 systemd-modules-load[178]: Inserted module 'overlay'
May 17 00:20:43.950846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:20:43.950870 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:20:43.950881 kernel: Bridge firewalling registered
May 17 00:20:43.950890 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:20:43.921170 systemd-modules-load[178]: Inserted module 'br_netfilter'
May 17 00:20:43.956467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:20:43.957196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:43.963331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:43.966305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:20:43.969491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:20:43.972077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:20:44.004526 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:44.005566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:20:44.013347 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:20:44.015282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:20:44.017250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:20:44.023679 dracut-cmdline[207]: dracut-dracut-053
May 17 00:20:44.027251 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:20:44.025478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:20:44.038087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:20:44.059420 systemd-resolved[219]: Positive Trust Anchors:
May 17 00:20:44.059435 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:20:44.059462 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:20:44.061963 systemd-resolved[219]: Defaulting to hostname 'linux'.
May 17 00:20:44.062983 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:20:44.065014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:20:44.099184 kernel: SCSI subsystem initialized
May 17 00:20:44.108184 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:20:44.118187 kernel: iscsi: registered transport (tcp)
May 17 00:20:44.137416 kernel: iscsi: registered transport (qla4xxx)
May 17 00:20:44.137459 kernel: QLogic iSCSI HBA Driver
May 17 00:20:44.174422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:20:44.179300 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:20:44.203155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:20:44.203198 kernel: device-mapper: uevent: version 1.0.3
May 17 00:20:44.206187 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:20:44.244186 kernel: raid6: avx2x4 gen() 36833 MB/s
May 17 00:20:44.262187 kernel: raid6: avx2x2 gen() 32478 MB/s
May 17 00:20:44.280774 kernel: raid6: avx2x1 gen() 25271 MB/s
May 17 00:20:44.280795 kernel: raid6: using algorithm avx2x4 gen() 36833 MB/s
May 17 00:20:44.299763 kernel: raid6: .... xor() 5212 MB/s, rmw enabled
May 17 00:20:44.299796 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:20:44.319190 kernel: xor: automatically using best checksumming function avx
May 17 00:20:44.445199 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:20:44.455355 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:20:44.460309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:20:44.473705 systemd-udevd[397]: Using default interface naming scheme 'v255'.
May 17 00:20:44.478400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:20:44.485388 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:20:44.497814 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
May 17 00:20:44.525750 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:20:44.531269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:20:44.590424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:20:44.599415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:20:44.613020 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:20:44.614435 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:20:44.616661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:20:44.617240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:20:44.625408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:20:44.634110 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:20:44.660218 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:20:44.664186 kernel: scsi host0: Virtio SCSI HBA
May 17 00:20:44.678209 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:20:44.683188 kernel: libata version 3.00 loaded.
May 17 00:20:44.687003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:20:44.694271 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:20:44.694286 kernel: AES CTR mode by8 optimization enabled
May 17 00:20:44.687545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:44.692844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:44.693607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:20:44.693719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:44.694826 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:44.767642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:20:44.799203 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:20:44.799433 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:20:44.800401 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:20:44.800617 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:20:44.807690 kernel: scsi host1: ahci
May 17 00:20:44.807868 kernel: scsi host2: ahci
May 17 00:20:44.808008 kernel: scsi host3: ahci
May 17 00:20:44.809180 kernel: scsi host4: ahci
May 17 00:20:44.809375 kernel: scsi host5: ahci
May 17 00:20:44.810286 kernel: scsi host6: ahci
May 17 00:20:44.810469 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
May 17 00:20:44.810481 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
May 17 00:20:44.810490 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
May 17 00:20:44.810500 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
May 17 00:20:44.810508 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
May 17 00:20:44.810517 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
May 17 00:20:44.870305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:20:44.876313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:20:44.892964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:20:45.128112 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:20:45.128184 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:20:45.128196 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:20:45.128206 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:20:45.128214 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 17 00:20:45.128223 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 17 00:20:45.145552 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 17 00:20:45.148267 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 17 00:20:45.148438 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:20:45.149191 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 17 00:20:45.149344 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:20:45.177745 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:20:45.177770 kernel: GPT:9289727 != 167739391
May 17 00:20:45.177782 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:20:45.179183 kernel: GPT:9289727 != 167739391
May 17 00:20:45.180466 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:20:45.182654 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:45.183867 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:20:45.213212 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (449)
May 17 00:20:45.218821 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 17 00:20:45.220342 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (444)
May 17 00:20:45.228390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 17 00:20:45.236737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 17 00:20:45.238289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 17 00:20:45.243689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:20:45.256289 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:20:45.261297 disk-uuid[567]: Primary Header is updated.
May 17 00:20:45.261297 disk-uuid[567]: Secondary Entries is updated.
May 17 00:20:45.261297 disk-uuid[567]: Secondary Header is updated.
May 17 00:20:45.266230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:45.271195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:46.274264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:20:46.274903 disk-uuid[568]: The operation has completed successfully.
May 17 00:20:46.317086 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:20:46.317240 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:20:46.329267 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:20:46.333594 sh[582]: Success
May 17 00:20:46.346386 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:20:46.386642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:20:46.394458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:20:46.395294 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:20:46.420677 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:20:46.420709 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:46.422707 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:20:46.426242 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:20:46.426258 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:20:46.434218 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:20:46.436463 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:20:46.437395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:20:46.443334 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:20:46.446308 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:20:46.457408 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:46.457441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:46.459885 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:46.463485 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:20:46.463508 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:46.474737 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:20:46.477463 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:46.481696 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:20:46.488944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:20:46.565917 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:20:46.570742 ignition[676]: Ignition 2.19.0
May 17 00:20:46.570753 ignition[676]: Stage: fetch-offline
May 17 00:20:46.573425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:20:46.570791 ignition[676]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:46.570802 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:20:46.571047 ignition[676]: parsed url from cmdline: ""
May 17 00:20:46.577267 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:20:46.571052 ignition[676]: no config URL provided
May 17 00:20:46.571058 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:46.571068 ignition[676]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:46.571074 ignition[676]: failed to fetch config: resource requires networking
May 17 00:20:46.571437 ignition[676]: Ignition finished successfully
May 17 00:20:46.592153 systemd-networkd[768]: lo: Link UP
May 17 00:20:46.592178 systemd-networkd[768]: lo: Gained carrier
May 17 00:20:46.593519 systemd-networkd[768]: Enumeration completed
May 17 00:20:46.593899 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:46.593903 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:20:46.595233 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:20:46.595684 systemd-networkd[768]: eth0: Link UP
May 17 00:20:46.595688 systemd-networkd[768]: eth0: Gained carrier
May 17 00:20:46.595694 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:20:46.596780 systemd[1]: Reached target network.target - Network.
May 17 00:20:46.607280 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:20:46.620015 ignition[771]: Ignition 2.19.0
May 17 00:20:46.620745 ignition[771]: Stage: fetch
May 17 00:20:46.620902 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:46.620913 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:20:46.621000 ignition[771]: parsed url from cmdline: ""
May 17 00:20:46.621004 ignition[771]: no config URL provided
May 17 00:20:46.621010 ignition[771]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:20:46.621018 ignition[771]: no config at "/usr/lib/ignition/user.ign"
May 17 00:20:46.621038 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #1
May 17 00:20:46.621263 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:20:46.821499 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #2
May 17 00:20:46.821736 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:20:47.026256 systemd-networkd[768]: eth0: DHCPv4 address 172.233.222.125/24, gateway 172.233.222.1 acquired from 23.210.200.22
May 17 00:20:47.228901 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #3
May 17 00:20:47.320594 ignition[771]: PUT result: OK
May 17 00:20:47.320706 ignition[771]: GET http://169.254.169.254/v1/user-data: attempt #1
May 17 00:20:47.435940 ignition[771]: GET result: OK
May 17 00:20:47.436090 ignition[771]: parsing config with SHA512: a930fa0db56a86ca4d802018f759aee53f357aa3458779186064ea78a3b9d5888fd6576e5c09448c8268d33856b6d85ce412a894bb51ed85c87238488292833f
May 17 00:20:47.439899 unknown[771]: fetched base config from "system"
May 17 00:20:47.440180 ignition[771]: fetch: fetch complete
May 17 00:20:47.439911 unknown[771]: fetched base config from "system"
May 17 00:20:47.440185 ignition[771]: fetch: fetch passed
May 17 00:20:47.439916 unknown[771]: fetched user config from "akamai"
May 17 00:20:47.440226 ignition[771]: Ignition finished successfully
May 17 00:20:47.443910 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:20:47.450293 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:20:47.468606 ignition[779]: Ignition 2.19.0
May 17 00:20:47.468621 ignition[779]: Stage: kargs
May 17 00:20:47.468780 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:47.468792 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:20:47.469766 ignition[779]: kargs: kargs passed
May 17 00:20:47.469813 ignition[779]: Ignition finished successfully
May 17 00:20:47.471656 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:20:47.479319 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:20:47.492870 ignition[785]: Ignition 2.19.0
May 17 00:20:47.492883 ignition[785]: Stage: disks
May 17 00:20:47.493030 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 17 00:20:47.495597 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:20:47.493042 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:20:47.496972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:20:47.493701 ignition[785]: disks: disks passed
May 17 00:20:47.516411 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:20:47.493736 ignition[785]: Ignition finished successfully
May 17 00:20:47.517551 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:20:47.518690 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:20:47.520048 systemd[1]: Reached target basic.target - Basic System.
May 17 00:20:47.526291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:20:47.543383 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:20:47.545797 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:20:47.551266 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:20:47.622180 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:20:47.622608 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:20:47.623724 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:20:47.640245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:47.642643 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:20:47.644145 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 00:20:47.644210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:20:47.644237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:20:47.649956 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:20:47.656198 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (801)
May 17 00:20:47.660671 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:47.660693 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:47.658884 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:20:47.663085 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:47.666926 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:20:47.666943 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:47.668767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:47.704650 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:20:47.709047 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 17 00:20:47.714042 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:20:47.718942 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:20:47.809673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:20:47.815332 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:20:47.819619 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:20:47.823966 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:20:47.826540 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:47.846199 ignition[914]: INFO : Ignition 2.19.0
May 17 00:20:47.846199 ignition[914]: INFO : Stage: mount
May 17 00:20:47.848511 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:47.848511 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:20:47.848511 ignition[914]: INFO : mount: mount passed
May 17 00:20:47.848511 ignition[914]: INFO : Ignition finished successfully
May 17 00:20:47.850390 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:20:47.865295 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:20:47.866615 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:20:48.163243 systemd-networkd[768]: eth0: Gained IPv6LL
May 17 00:20:48.627436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:20:48.641513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (925)
May 17 00:20:48.641589 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:20:48.645262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:20:48.645278 kernel: BTRFS info (device sda6): using free space tree
May 17 00:20:48.650208 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:20:48.650224 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:20:48.654780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:20:48.676151 ignition[942]: INFO : Ignition 2.19.0
May 17 00:20:48.676151 ignition[942]: INFO : Stage: files
May 17 00:20:48.677403 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:20:48.677403 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 17 00:20:48.677403 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:20:48.679282 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:20:48.679282 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:20:48.680951 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:20:48.681761 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:20:48.681761 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:20:48.681342 unknown[942]: wrote ssh authorized keys file for user: core
May 17 00:20:48.683648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:20:48.683648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:20:48.972296 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:20:49.245269 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:20:49.245269 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:20:49.858250 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:20:50.152760 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:20:50.152760 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): [finished]
processing unit "coreos-metadata.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:20:50.155349 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:20:50.155349 ignition[942]: INFO : files: files passed May 17 00:20:50.155349 ignition[942]: INFO : Ignition finished successfully May 17 00:20:50.158300 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:20:50.187304 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:20:50.191506 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:20:50.193836 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:20:50.203321 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:20:50.212736 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:50.212736 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:50.215058 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:50.216974 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:20:50.218088 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:20:50.223296 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:20:50.256797 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:20:50.256939 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:20:50.258151 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:20:50.259376 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:20:50.260553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:20:50.265317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:20:50.279223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:20:50.285296 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:20:50.294628 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:20:50.295216 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:20:50.295847 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:20:50.297233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:20:50.297365 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:20:50.298780 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:20:50.299531 systemd[1]: Stopped target basic.target - Basic System. May 17 00:20:50.300507 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
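
The files stage recorded above is driven entirely by the user config fetched earlier from the metadata service. As a hedged reconstruction, an Ignition v3-style config producing those ops (the core user with SSH keys, the Helm tarball, the Kubernetes sysext image with its /etc/extensions activation link, the coreos-metadata drop-in, and the preset-enabled prepare-helm.service) might look like the sketch below; the field names follow the public Ignition spec, while the spec version, SSH key, and unit/drop-in bodies are placeholders rather than values from this host:

import json

config = {
    "ignition": {"version": "3.4.0"},  # placeholder spec version
    "passwd": {
        "users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... admin@example"],  # placeholder key
        }]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
        ],
        "links": [
            # systemd-sysext merges any *.raw image linked under /etc/extensions.
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service",
             "enabled": True,  # yields the "setting preset to enabled" op
             "contents": "[Unit]\n...placeholder unit body...\n"},
            {"name": "coreos-metadata.service",
             "dropins": [{"name": "00-custom-metadata.conf",
                          "contents": "[Service]\n...placeholder drop-in...\n"}]},
        ]
    },
}

print(json.dumps(config, indent=2))
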
May 17 00:20:50.301877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:20:50.303064 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:20:50.304133 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:20:50.305334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:20:50.306543 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:20:50.307705 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:20:50.308832 systemd[1]: Stopped target swap.target - Swaps. May 17 00:20:50.309968 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:20:50.310050 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:20:50.311528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:20:50.312370 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:20:50.313424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:20:50.315260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:20:50.315873 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:20:50.315954 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:20:50.317368 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:20:50.317458 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:20:50.318113 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:20:50.318209 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:20:50.329527 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:20:50.332361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:20:50.332950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:20:50.333098 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:20:50.335118 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:20:50.335291 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:20:50.342842 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:20:50.343413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:20:50.352835 ignition[995]: INFO : Ignition 2.19.0 May 17 00:20:50.354859 ignition[995]: INFO : Stage: umount May 17 00:20:50.354859 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:50.354859 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:50.354859 ignition[995]: INFO : umount: umount passed May 17 00:20:50.354859 ignition[995]: INFO : Ignition finished successfully May 17 00:20:50.359824 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:20:50.359941 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:20:50.360594 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:20:50.360638 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:20:50.362703 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:20:50.362752 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 17 00:20:50.363385 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:20:50.363433 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:20:50.364005 systemd[1]: Stopped target network.target - Network. May 17 00:20:50.364444 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:20:50.364684 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:20:50.365213 systemd[1]: Stopped target paths.target - Path Units. May 17 00:20:50.365944 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:20:50.366571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:20:50.367224 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:20:50.367625 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:20:50.368095 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:20:50.368137 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:20:50.368858 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:20:50.368896 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:20:50.371005 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:20:50.371055 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:20:50.373813 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:20:50.373870 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:20:50.395684 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:20:50.396229 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:20:50.399889 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:20:50.400320 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:20:50.400411 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:20:50.401657 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:20:50.401729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:20:50.406267 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:20:50.406410 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:20:50.409233 systemd-networkd[768]: eth0: DHCPv6 lease lost May 17 00:20:50.409594 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:20:50.409656 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:20:50.411320 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:20:50.411437 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:20:50.412957 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:20:50.413012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:20:50.422302 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:20:50.423625 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:20:50.423692 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:20:50.424989 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:20:50.425038 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 17 00:20:50.426326 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:20:50.426380 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:20:50.427525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:20:50.439061 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:20:50.439240 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:20:50.441642 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:20:50.441831 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:20:50.443802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:20:50.443886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:20:50.444592 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:20:50.444633 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:20:50.445758 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:20:50.445807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:20:50.447370 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:20:50.447417 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:20:50.448503 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:20:50.448554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:20:50.455387 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:20:50.456479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:20:50.456531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:20:50.458273 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:20:50.458317 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:20:50.459010 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:20:50.459059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:20:50.459691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:20:50.459737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:50.462295 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:20:50.462406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:20:50.463787 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:20:50.469312 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:20:50.475442 systemd[1]: Switching root. May 17 00:20:50.506185 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
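
"Switching root" above is the hand-off from the initramfs to the installed system: systemd moves the prepared /sysroot over /, re-executes itself from the new tree, and the initrd journald receives SIGTERM so the on-disk journal can take over. The core move/chroot/exec sequence, sketched in Python via a ctypes mount(2) call (a deliberate simplification; systemd's real switch-root also closes descriptors and recursively deletes the old initramfs):

import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
MS_MOVE = 8192  # from <sys/mount.h>

os.chdir("/sysroot")
# Move the prepared root onto /; the old initramfs contents become unreachable.
if libc.mount(b".", b"/", None, MS_MOVE, None) != 0:
    raise OSError(ctypes.get_errno(), "mount --move failed")
os.chroot(".")
os.chdir("/")
# Re-exec the real init from the new root; PID 1 keeps its PID across execv.
os.execv("/usr/lib/systemd/systemd", ["/usr/lib/systemd/systemd"])
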
May 17 00:20:50.506240 systemd-journald[177]: Journal stopped May
17 00:20:43.882563 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] May 17 00:20:43.882569 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 17 00:20:43.882576 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:20:43.882584 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:20:43.882590 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 17 00:20:43.882596 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:20:43.882602 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:20:43.882608 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:20:43.882615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:20:43.882621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:20:43.882627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:20:43.882633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:20:43.882641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:20:43.882648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:20:43.882654 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:20:43.882660 kernel: TSC deadline timer available May 17 00:20:43.882666 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:20:43.882672 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:20:43.882678 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:20:43.882684 kernel: kvm-guest: setup PV sched yield May 17 00:20:43.882690 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:20:43.882699 kernel: Booting paravirtualized kernel on KVM May 17 00:20:43.882706 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:20:43.882712 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:20:43.882718 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:20:43.882724 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:20:43.882730 kernel: pcpu-alloc: [0] 0 1 May 17 00:20:43.882736 kernel: kvm-guest: PV spinlocks enabled May 17 00:20:43.882743 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:20:43.882750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:20:43.882759 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:20:43.882765 kernel: random: crng init done May 17 00:20:43.882771 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:20:43.882777 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:20:43.882783 kernel: Fallback order for Node 0: 0 May 17 00:20:43.882789 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 17 00:20:43.882795 kernel: Policy zone: Normal May 17 00:20:43.882801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:20:43.882809 kernel: software IO TLB: area num 2. May 17 00:20:43.882816 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 227308K reserved, 0K cma-reserved) May 17 00:20:43.882822 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:20:43.882828 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:20:43.882834 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:20:43.882840 kernel: Dynamic Preempt: voluntary May 17 00:20:43.882847 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:20:43.882853 kernel: rcu: RCU event tracing is enabled. May 17 00:20:43.882860 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:20:43.882868 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:20:43.882875 kernel: Rude variant of Tasks RCU enabled. May 17 00:20:43.882881 kernel: Tracing variant of Tasks RCU enabled. May 17 00:20:43.882887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:20:43.882893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:20:43.882899 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:20:43.882905 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:20:43.882911 kernel: Console: colour VGA+ 80x25 May 17 00:20:43.882917 kernel: printk: console [tty0] enabled May 17 00:20:43.882926 kernel: printk: console [ttyS0] enabled May 17 00:20:43.882932 kernel: ACPI: Core revision 20230628 May 17 00:20:43.882938 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:20:43.882944 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:20:43.882958 kernel: x2apic enabled May 17 00:20:43.882966 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:20:43.882973 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 17 00:20:43.882979 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 17 00:20:43.882986 kernel: kvm-guest: setup PV IPIs May 17 00:20:43.882992 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:20:43.882999 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:20:43.883005 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) May 17 00:20:43.883014 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:20:43.883020 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:20:43.883027 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:20:43.883033 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:20:43.883040 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:20:43.883048 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:20:43.883055 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 17 00:20:43.883061 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:20:43.883068 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 17 00:20:43.883075 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 17 00:20:43.883082 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 17 00:20:43.883088 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 17 00:20:43.883095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:20:43.883103 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:20:43.883110 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:20:43.883116 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:20:43.883122 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:20:43.883129 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 17 00:20:43.883135 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 17 00:20:43.883142 kernel: Freeing SMP alternatives memory: 32K May 17 00:20:43.883148 kernel: pid_max: default: 32768 minimum: 301 May 17 00:20:43.883155 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:20:43.885188 kernel: landlock: Up and running. May 17 00:20:43.885197 kernel: SELinux: Initializing. May 17 00:20:43.885204 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:20:43.885211 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:20:43.885218 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 17 00:20:43.885224 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:20:43.885231 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:20:43.885237 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:20:43.885244 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:20:43.885254 kernel: ... version: 0 May 17 00:20:43.885260 kernel: ... bit width: 48 May 17 00:20:43.885267 kernel: ... generic registers: 6 May 17 00:20:43.885273 kernel: ... value mask: 0000ffffffffffff May 17 00:20:43.885279 kernel: ... max period: 00007fffffffffff May 17 00:20:43.885286 kernel: ... fixed-purpose events: 0 May 17 00:20:43.885292 kernel: ... 
event mask: 000000000000003f May 17 00:20:43.885298 kernel: signal: max sigframe size: 3376 May 17 00:20:43.885305 kernel: rcu: Hierarchical SRCU implementation. May 17 00:20:43.885314 kernel: rcu: Max phase no-delay instances is 400. May 17 00:20:43.885320 kernel: smp: Bringing up secondary CPUs ... May 17 00:20:43.885327 kernel: smpboot: x86: Booting SMP configuration: May 17 00:20:43.885333 kernel: .... node #0, CPUs: #1 May 17 00:20:43.885339 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:20:43.885346 kernel: smpboot: Max logical packages: 1 May 17 00:20:43.885352 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) May 17 00:20:43.885358 kernel: devtmpfs: initialized May 17 00:20:43.885365 kernel: x86/mm: Memory block size: 128MB May 17 00:20:43.885373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:20:43.885380 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:20:43.885386 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:20:43.885393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:20:43.885399 kernel: audit: initializing netlink subsys (disabled) May 17 00:20:43.885406 kernel: audit: type=2000 audit(1747441243.410:1): state=initialized audit_enabled=0 res=1 May 17 00:20:43.885412 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:20:43.885418 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:20:43.885425 kernel: cpuidle: using governor menu May 17 00:20:43.885433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:20:43.885440 kernel: dca service started, version 1.12.1 May 17 00:20:43.885446 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:20:43.885453 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 17 00:20:43.885459 kernel: PCI: Using configuration type 1 for base access May 17 00:20:43.885466 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:20:43.885472 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:20:43.885479 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:20:43.885485 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:20:43.885494 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:20:43.885500 kernel: ACPI: Added _OSI(Module Device) May 17 00:20:43.885507 kernel: ACPI: Added _OSI(Processor Device) May 17 00:20:43.885513 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:20:43.885519 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:20:43.885526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:20:43.885532 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:20:43.885538 kernel: ACPI: Interpreter enabled May 17 00:20:43.885545 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:20:43.885553 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:20:43.885560 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:20:43.885566 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:20:43.885572 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:20:43.885579 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:20:43.885752 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:20:43.885876 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:20:43.885992 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:20:43.886002 kernel: PCI host bridge to bus 0000:00 May 17 00:20:43.886117 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:20:43.886260 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:20:43.886364 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:20:43.886464 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 17 00:20:43.886564 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:20:43.886664 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 17 00:20:43.886771 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:20:43.886906 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:20:43.887028 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:20:43.887139 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:20:43.887276 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:20:43.887388 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:20:43.887502 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:20:43.887621 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 May 17 00:20:43.887733 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] May 17 00:20:43.887843 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:20:43.887952 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:20:43.888071 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 00:20:43.891241 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 17 00:20:43.891376 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 17 
00:20:43.891489 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:20:43.891601 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:20:43.891719 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:20:43.891845 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:20:43.891966 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:20:43.892083 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] May 17 00:20:43.892209 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] May 17 00:20:43.892329 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:20:43.892437 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:20:43.892447 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:20:43.892454 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:20:43.892461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:20:43.892467 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:20:43.892477 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:20:43.892484 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:20:43.892490 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:20:43.892497 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:20:43.892503 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:20:43.892510 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:20:43.892516 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:20:43.892522 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:20:43.892529 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:20:43.892538 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:20:43.892544 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:20:43.892551 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:20:43.892557 kernel: iommu: Default domain type: Translated May 17 00:20:43.892564 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:20:43.892570 kernel: PCI: Using ACPI for IRQ routing May 17 00:20:43.892576 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:20:43.892583 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 17 00:20:43.892589 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 17 00:20:43.892698 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:20:43.892806 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:20:43.892913 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:20:43.892922 kernel: vgaarb: loaded May 17 00:20:43.892929 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:20:43.892935 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:20:43.892942 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:20:43.892948 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:20:43.892958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:20:43.892964 kernel: pnp: PnP ACPI init May 17 00:20:43.893088 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:20:43.893098 kernel: pnp: PnP ACPI: found 5 devices May 17 00:20:43.893105 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:20:43.893112 kernel: NET: Registered PF_INET protocol family May 17 00:20:43.893118 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:20:43.893125 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:20:43.893134 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:20:43.893141 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:20:43.893147 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:20:43.893154 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:20:43.895554 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:20:43.895563 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:20:43.895570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:20:43.895576 kernel: NET: Registered PF_XDP protocol family May 17 00:20:43.895690 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:20:43.895799 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:20:43.895899 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:20:43.896000 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 17 00:20:43.896099 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:20:43.896229 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 17 00:20:43.896240 kernel: PCI: CLS 0 bytes, default 64 May 17 00:20:43.896247 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 17 00:20:43.896254 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 17 00:20:43.896264 kernel: Initialise system trusted keyrings May 17 00:20:43.896271 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:20:43.896278 kernel: Key type asymmetric registered May 17 00:20:43.896284 kernel: Asymmetric key parser 'x509' registered May 17 00:20:43.896290 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:20:43.896297 kernel: io scheduler mq-deadline registered May 17 00:20:43.896303 kernel: io scheduler kyber registered May 17 00:20:43.896310 kernel: io scheduler bfq registered May 17 00:20:43.896316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:20:43.896324 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:20:43.896333 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:20:43.896339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:20:43.896345 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:20:43.896352 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:20:43.896359 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:20:43.896365 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:20:43.896371 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:20:43.896486 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:20:43.896596 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:20:43.896706 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:20:43 UTC (1747441243) May 17 00:20:43.896807 kernel: rtc_cmos 00:03: alarms up to one day, 
y3k, 242 bytes nvram, hpet irqs May 17 00:20:43.896816 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:20:43.896823 kernel: NET: Registered PF_INET6 protocol family May 17 00:20:43.896829 kernel: Segment Routing with IPv6 May 17 00:20:43.896836 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:20:43.896842 kernel: NET: Registered PF_PACKET protocol family May 17 00:20:43.896852 kernel: Key type dns_resolver registered May 17 00:20:43.896858 kernel: IPI shorthand broadcast: enabled May 17 00:20:43.896865 kernel: sched_clock: Marking stable (683003570, 207176012)->(949053935, -58874353) May 17 00:20:43.896871 kernel: registered taskstats version 1 May 17 00:20:43.896878 kernel: Loading compiled-in X.509 certificates May 17 00:20:43.896884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:20:43.896890 kernel: Key type .fscrypt registered May 17 00:20:43.896897 kernel: Key type fscrypt-provisioning registered May 17 00:20:43.896903 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:20:43.896912 kernel: ima: Allocated hash algorithm: sha1 May 17 00:20:43.896918 kernel: ima: No architecture policies found May 17 00:20:43.896925 kernel: clk: Disabling unused clocks May 17 00:20:43.896931 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:20:43.896938 kernel: Write protecting the kernel read-only data: 36864k May 17 00:20:43.896944 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:20:43.896950 kernel: Run /init as init process May 17 00:20:43.896957 kernel: with arguments: May 17 00:20:43.896963 kernel: /init May 17 00:20:43.896972 kernel: with environment: May 17 00:20:43.896978 kernel: HOME=/ May 17 00:20:43.896985 kernel: TERM=linux May 17 00:20:43.896991 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:20:43.896999 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:20:43.897008 systemd[1]: Detected virtualization kvm. May 17 00:20:43.897015 systemd[1]: Detected architecture x86-64. May 17 00:20:43.897021 systemd[1]: Running in initrd. May 17 00:20:43.897030 systemd[1]: No hostname configured, using default hostname. May 17 00:20:43.897037 systemd[1]: Hostname set to <localhost>. May 17 00:20:43.897044 systemd[1]: Initializing machine ID from random generator. May 17 00:20:43.897051 systemd[1]: Queued start job for default target initrd.target. May 17 00:20:43.897058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:20:43.897078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:20:43.897090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:20:43.897097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:20:43.897104 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:20:43.897111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:20:43.897120 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:20:43.897127 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:20:43.897137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:20:43.897144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:20:43.897151 systemd[1]: Reached target paths.target - Path Units. May 17 00:20:43.897158 systemd[1]: Reached target slices.target - Slice Units. May 17 00:20:43.898220 systemd[1]: Reached target swap.target - Swaps. May 17 00:20:43.898229 systemd[1]: Reached target timers.target - Timer Units. May 17 00:20:43.898236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:20:43.898244 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:20:43.898251 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:20:43.898262 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:20:43.898269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:20:43.898276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:20:43.898283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:20:43.898290 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:20:43.898297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:20:43.898304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:20:43.898311 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:20:43.898318 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:20:43.898328 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:20:43.898335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:20:43.898362 systemd-journald[177]: Collecting audit messages is disabled. May 17 00:20:43.898379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:20:43.898390 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:20:43.898397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:20:43.898407 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:20:43.898418 systemd-journald[177]: Journal started May 17 00:20:43.898433 systemd-journald[177]: Runtime Journal (/run/log/journal/93883c70eba4463a940dbe45a886250a) is 8.0M, max 78.3M, 70.3M free. May 17 00:20:43.897646 systemd-modules-load[178]: Inserted module 'overlay' May 17 00:20:43.950846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:20:43.950870 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:20:43.950881 kernel: Bridge firewalling registered May 17 00:20:43.950890 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:20:43.921170 systemd-modules-load[178]: Inserted module 'br_netfilter' May 17 00:20:43.956467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 17 00:20:43.957196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:43.963331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:20:43.966305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:20:43.969491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:20:43.972077 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:20:44.004526 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:20:44.005566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:20:44.013347 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:20:44.015282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:20:44.017250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:20:44.023679 dracut-cmdline[207]: dracut-dracut-053 May 17 00:20:44.027251 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:20:44.025478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:20:44.038087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:20:44.059420 systemd-resolved[219]: Positive Trust Anchors: May 17 00:20:44.059435 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:20:44.059462 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:20:44.061963 systemd-resolved[219]: Defaulting to hostname 'linux'. May 17 00:20:44.062983 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:20:44.065014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:20:44.099184 kernel: SCSI subsystem initialized May 17 00:20:44.108184 kernel: Loading iSCSI transport class v2.0-870. May 17 00:20:44.118187 kernel: iscsi: registered transport (tcp) May 17 00:20:44.137416 kernel: iscsi: registered transport (qla4xxx) May 17 00:20:44.137459 kernel: QLogic iSCSI HBA Driver May 17 00:20:44.174422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:20:44.179300 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:20:44.203155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:20:44.203198 kernel: device-mapper: uevent: version 1.0.3 May 17 00:20:44.206187 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:20:44.244186 kernel: raid6: avx2x4 gen() 36833 MB/s May 17 00:20:44.262187 kernel: raid6: avx2x2 gen() 32478 MB/s May 17 00:20:44.280774 kernel: raid6: avx2x1 gen() 25271 MB/s May 17 00:20:44.280795 kernel: raid6: using algorithm avx2x4 gen() 36833 MB/s May 17 00:20:44.299763 kernel: raid6: .... xor() 5212 MB/s, rmw enabled May 17 00:20:44.299796 kernel: raid6: using avx2x2 recovery algorithm May 17 00:20:44.319190 kernel: xor: automatically using best checksumming function avx May 17 00:20:44.445199 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:20:44.455355 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:20:44.460309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:20:44.473705 systemd-udevd[397]: Using default interface naming scheme 'v255'. May 17 00:20:44.478400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:20:44.485388 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:20:44.497814 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 17 00:20:44.525750 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:20:44.531269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:20:44.590424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:20:44.599415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:20:44.613020 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:20:44.614435 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:20:44.616661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:20:44.617240 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:20:44.625408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:20:44.634110 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:20:44.660218 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:20:44.664186 kernel: scsi host0: Virtio SCSI HBA May 17 00:20:44.678209 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 17 00:20:44.683188 kernel: libata version 3.00 loaded. May 17 00:20:44.687003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:20:44.694271 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:20:44.694286 kernel: AES CTR mode by8 optimization enabled May 17 00:20:44.687545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:20:44.692844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:20:44.693607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:20:44.693719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:44.694826 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:20:44.767642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 17 00:20:44.799203 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:20:44.799433 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:20:44.800401 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:20:44.800617 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:20:44.807690 kernel: scsi host1: ahci May 17 00:20:44.807868 kernel: scsi host2: ahci May 17 00:20:44.808008 kernel: scsi host3: ahci May 17 00:20:44.809180 kernel: scsi host4: ahci May 17 00:20:44.809375 kernel: scsi host5: ahci May 17 00:20:44.810286 kernel: scsi host6: ahci May 17 00:20:44.810469 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 May 17 00:20:44.810481 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 May 17 00:20:44.810490 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 May 17 00:20:44.810500 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 May 17 00:20:44.810508 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 May 17 00:20:44.810517 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 May 17 00:20:44.870305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:44.876313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:20:44.892964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:20:45.128112 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:20:45.128184 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:20:45.128196 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:20:45.128206 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:20:45.128214 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 17 00:20:45.128223 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:20:45.145552 kernel: sd 0:0:0:0: Power-on or device reset occurred May 17 00:20:45.148267 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 17 00:20:45.148438 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:20:45.149191 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 17 00:20:45.149344 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 17 00:20:45.177745 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:20:45.177770 kernel: GPT:9289727 != 167739391 May 17 00:20:45.177782 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:20:45.179183 kernel: GPT:9289727 != 167739391 May 17 00:20:45.180466 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:20:45.182654 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:20:45.183867 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:20:45.213212 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (449) May 17 00:20:45.218821 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 17 00:20:45.220342 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (444) May 17 00:20:45.228390 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
May 17 00:20:45.236737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 17 00:20:45.238289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 17 00:20:45.243689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:20:45.256289 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:20:45.261297 disk-uuid[567]: Primary Header is updated. May 17 00:20:45.261297 disk-uuid[567]: Secondary Entries is updated. May 17 00:20:45.261297 disk-uuid[567]: Secondary Header is updated. May 17 00:20:45.266230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:20:45.271195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:20:46.274264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:20:46.274903 disk-uuid[568]: The operation has completed successfully. May 17 00:20:46.317086 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:20:46.317240 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:20:46.329267 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:20:46.333594 sh[582]: Success May 17 00:20:46.346386 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:20:46.386642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:20:46.394458 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:20:46.395294 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:20:46.420677 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:20:46.420709 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:20:46.422707 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:20:46.426242 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:20:46.426258 kernel: BTRFS info (device dm-0): using free space tree May 17 00:20:46.434218 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:20:46.436463 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:20:46.437395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:20:46.443334 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:20:46.446308 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:20:46.457408 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:20:46.457441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:20:46.459885 kernel: BTRFS info (device sda6): using free space tree May 17 00:20:46.463485 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:20:46.463508 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:20:46.474737 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:20:46.477463 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:20:46.481696 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 17 00:20:46.488944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:20:46.565917 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:20:46.570742 ignition[676]: Ignition 2.19.0 May 17 00:20:46.570753 ignition[676]: Stage: fetch-offline May 17 00:20:46.573425 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:20:46.570791 ignition[676]: no configs at "/usr/lib/ignition/base.d" May 17 00:20:46.570802 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:46.571047 ignition[676]: parsed url from cmdline: "" May 17 00:20:46.577267 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:20:46.571052 ignition[676]: no config URL provided May 17 00:20:46.571058 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:20:46.571068 ignition[676]: no config at "/usr/lib/ignition/user.ign" May 17 00:20:46.571074 ignition[676]: failed to fetch config: resource requires networking May 17 00:20:46.571437 ignition[676]: Ignition finished successfully May 17 00:20:46.592153 systemd-networkd[768]: lo: Link UP May 17 00:20:46.592178 systemd-networkd[768]: lo: Gained carrier May 17 00:20:46.593519 systemd-networkd[768]: Enumeration completed May 17 00:20:46.593899 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:46.593903 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:20:46.595233 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:20:46.595684 systemd-networkd[768]: eth0: Link UP May 17 00:20:46.595688 systemd-networkd[768]: eth0: Gained carrier May 17 00:20:46.595694 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:46.596780 systemd[1]: Reached target network.target - Network. May 17 00:20:46.607280 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 17 00:20:46.620015 ignition[771]: Ignition 2.19.0 May 17 00:20:46.620745 ignition[771]: Stage: fetch May 17 00:20:46.620902 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 17 00:20:46.620913 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:46.621000 ignition[771]: parsed url from cmdline: "" May 17 00:20:46.621004 ignition[771]: no config URL provided May 17 00:20:46.621010 ignition[771]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:20:46.621018 ignition[771]: no config at "/usr/lib/ignition/user.ign" May 17 00:20:46.621038 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #1 May 17 00:20:46.621263 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:20:46.821499 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #2 May 17 00:20:46.821736 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 17 00:20:47.026256 systemd-networkd[768]: eth0: DHCPv4 address 172.233.222.125/24, gateway 172.233.222.1 acquired from 23.210.200.22 May 17 00:20:47.228901 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #3 May 17 00:20:47.320594 ignition[771]: PUT result: OK May 17 00:20:47.320706 ignition[771]: GET http://169.254.169.254/v1/user-data: attempt #1 May 17 00:20:47.435940 ignition[771]: GET result: OK May 17 00:20:47.436090 ignition[771]: parsing config with SHA512: a930fa0db56a86ca4d802018f759aee53f357aa3458779186064ea78a3b9d5888fd6576e5c09448c8268d33856b6d85ce412a894bb51ed85c87238488292833f May 17 00:20:47.439899 unknown[771]: fetched base config from "system" May 17 00:20:47.440180 ignition[771]: fetch: fetch complete May 17 00:20:47.439911 unknown[771]: fetched base config from "system" May 17 00:20:47.440185 ignition[771]: fetch: fetch passed May 17 00:20:47.439916 unknown[771]: fetched user config from "akamai" May 17 00:20:47.440226 ignition[771]: Ignition finished successfully May 17 00:20:47.443910 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:20:47.450293 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:20:47.468606 ignition[779]: Ignition 2.19.0 May 17 00:20:47.468621 ignition[779]: Stage: kargs May 17 00:20:47.468780 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 17 00:20:47.468792 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:47.469766 ignition[779]: kargs: kargs passed May 17 00:20:47.469813 ignition[779]: Ignition finished successfully May 17 00:20:47.471656 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:20:47.479319 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:20:47.492870 ignition[785]: Ignition 2.19.0 May 17 00:20:47.492883 ignition[785]: Stage: disks May 17 00:20:47.493030 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 17 00:20:47.495597 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:20:47.493042 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:47.496972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:20:47.493701 ignition[785]: disks: disks passed May 17 00:20:47.516411 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
May 17 00:20:47.493736 ignition[785]: Ignition finished successfully May 17 00:20:47.517551 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:20:47.518690 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:20:47.520048 systemd[1]: Reached target basic.target - Basic System. May 17 00:20:47.526291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:20:47.543383 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:20:47.545797 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:20:47.551266 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:20:47.622180 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:20:47.622608 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:20:47.623724 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:20:47.640245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:20:47.642643 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:20:47.644145 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:20:47.644210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:20:47.644237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:20:47.649956 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:20:47.656198 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (801) May 17 00:20:47.660671 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:20:47.660693 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:20:47.658884 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:20:47.663085 kernel: BTRFS info (device sda6): using free space tree May 17 00:20:47.666926 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:20:47.666943 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:20:47.668767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:20:47.704650 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:20:47.709047 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory May 17 00:20:47.714042 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:20:47.718942 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:20:47.809673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:20:47.815332 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:20:47.819619 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:20:47.823966 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 17 00:20:47.826540 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:20:47.846199 ignition[914]: INFO : Ignition 2.19.0 May 17 00:20:47.846199 ignition[914]: INFO : Stage: mount May 17 00:20:47.848511 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:47.848511 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:47.848511 ignition[914]: INFO : mount: mount passed May 17 00:20:47.848511 ignition[914]: INFO : Ignition finished successfully May 17 00:20:47.850390 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:20:47.865295 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:20:47.866615 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:20:48.163243 systemd-networkd[768]: eth0: Gained IPv6LL May 17 00:20:48.627436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:20:48.641513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (925) May 17 00:20:48.641589 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:20:48.645262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:20:48.645278 kernel: BTRFS info (device sda6): using free space tree May 17 00:20:48.650208 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:20:48.650224 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:20:48.654780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:20:48.676151 ignition[942]: INFO : Ignition 2.19.0 May 17 00:20:48.676151 ignition[942]: INFO : Stage: files May 17 00:20:48.677403 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:48.677403 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:48.677403 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 17 00:20:48.679282 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:20:48.679282 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:20:48.680951 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:20:48.681761 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:20:48.681761 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:20:48.681342 unknown[942]: wrote ssh authorized keys file for user: core May 17 00:20:48.683648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:20:48.683648 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 17 00:20:48.972296 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:20:49.245269 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:20:49.245269 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:20:49.247352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:20:49.252473 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 17 00:20:49.858250 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 17 00:20:50.152760 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:20:50.152760 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:20:50.155349 ignition[942]: INFO : files: op(d): [finished] 
processing unit "coreos-metadata.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:20:50.155349 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:20:50.155349 ignition[942]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:20:50.155349 ignition[942]: INFO : files: files passed May 17 00:20:50.155349 ignition[942]: INFO : Ignition finished successfully May 17 00:20:50.158300 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:20:50.187304 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:20:50.191506 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:20:50.193836 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:20:50.203321 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:20:50.212736 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:50.212736 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:50.215058 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:20:50.216974 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:20:50.218088 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:20:50.223296 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:20:50.256797 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:20:50.256939 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:20:50.258151 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:20:50.259376 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:20:50.260553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:20:50.265317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:20:50.279223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:20:50.285296 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:20:50.294628 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:20:50.295216 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:20:50.295847 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:20:50.297233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:20:50.297365 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:20:50.298780 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:20:50.299531 systemd[1]: Stopped target basic.target - Basic System. May 17 00:20:50.300507 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
May 17 00:20:50.301877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:20:50.303064 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:20:50.304133 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:20:50.305334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:20:50.306543 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:20:50.307705 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:20:50.308832 systemd[1]: Stopped target swap.target - Swaps. May 17 00:20:50.309968 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:20:50.310050 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:20:50.311528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:20:50.312370 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:20:50.313424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:20:50.315260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:20:50.315873 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:20:50.315954 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:20:50.317368 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:20:50.317458 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:20:50.318113 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:20:50.318209 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:20:50.329527 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:20:50.332361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:20:50.332950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:20:50.333098 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:20:50.335118 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:20:50.335291 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:20:50.342842 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:20:50.343413 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:20:50.352835 ignition[995]: INFO : Ignition 2.19.0 May 17 00:20:50.354859 ignition[995]: INFO : Stage: umount May 17 00:20:50.354859 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:20:50.354859 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 17 00:20:50.354859 ignition[995]: INFO : umount: umount passed May 17 00:20:50.354859 ignition[995]: INFO : Ignition finished successfully May 17 00:20:50.359824 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:20:50.359941 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:20:50.360594 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:20:50.360638 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:20:50.362703 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:20:50.362752 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 17 00:20:50.363385 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:20:50.363433 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:20:50.364005 systemd[1]: Stopped target network.target - Network. May 17 00:20:50.364444 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:20:50.364684 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:20:50.365213 systemd[1]: Stopped target paths.target - Path Units. May 17 00:20:50.365944 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:20:50.366571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:20:50.367224 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:20:50.367625 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:20:50.368095 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:20:50.368137 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:20:50.368858 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:20:50.368896 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:20:50.371005 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:20:50.371055 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:20:50.373813 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:20:50.373870 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:20:50.395684 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:20:50.396229 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:20:50.399889 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:20:50.400320 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:20:50.400411 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:20:50.401657 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:20:50.401729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:20:50.406267 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:20:50.406410 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:20:50.409233 systemd-networkd[768]: eth0: DHCPv6 lease lost May 17 00:20:50.409594 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:20:50.409656 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:20:50.411320 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:20:50.411437 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:20:50.412957 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:20:50.413012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:20:50.422302 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:20:50.423625 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:20:50.423692 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:20:50.424989 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:20:50.425038 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 17 00:20:50.426326 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:20:50.426380 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:20:50.427525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:20:50.439061 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:20:50.439240 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:20:50.441642 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:20:50.441831 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:20:50.443802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:20:50.443886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:20:50.444592 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:20:50.444633 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:20:50.445758 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:20:50.445807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:20:50.447370 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:20:50.447417 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:20:50.448503 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:20:50.448554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:20:50.455387 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:20:50.456479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:20:50.456531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:20:50.458273 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:20:50.458317 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:20:50.459010 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:20:50.459059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:20:50.459691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:20:50.459737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:50.462295 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:20:50.462406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:20:50.463787 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:20:50.469312 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:20:50.475442 systemd[1]: Switching root. May 17 00:20:50.506185 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
May 17 00:20:50.506240 systemd-journald[177]: Journal stopped May 17 00:20:51.503748 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:20:51.503770 kernel: SELinux: policy capability open_perms=1 May 17 00:20:51.503779 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:20:51.503786 kernel: SELinux: policy capability always_check_network=0 May 17 00:20:51.503793 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:20:51.503802 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:20:51.503810 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:20:51.503817 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:20:51.503824 kernel: audit: type=1403 audit(1747441250.638:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:20:51.503832 systemd[1]: Successfully loaded SELinux policy in 49.083ms. May 17 00:20:51.503842 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.777ms. May 17 00:20:51.503853 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:20:51.503861 systemd[1]: Detected virtualization kvm. May 17 00:20:51.503868 systemd[1]: Detected architecture x86-64. May 17 00:20:51.503876 systemd[1]: Detected first boot. May 17 00:20:51.503887 systemd[1]: Initializing machine ID from random generator. May 17 00:20:51.503895 zram_generator::config[1037]: No configuration found. May 17 00:20:51.503903 systemd[1]: Populated /etc with preset unit settings. May 17 00:20:51.503911 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:20:51.503919 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:20:51.503927 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:20:51.503935 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:20:51.503943 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:20:51.503953 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:20:51.503962 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:20:51.503970 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:20:51.503978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:20:51.503986 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:20:51.503994 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:20:51.504002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:20:51.504012 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:20:51.504020 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:20:51.504028 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:20:51.504036 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 17 00:20:51.504044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:20:51.504053 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:20:51.504061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:20:51.504069 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:20:51.504077 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:20:51.504088 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:20:51.504099 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:20:51.504107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:20:51.504116 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:20:51.504124 systemd[1]: Reached target slices.target - Slice Units. May 17 00:20:51.504132 systemd[1]: Reached target swap.target - Swaps. May 17 00:20:51.504140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:20:51.504150 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:20:51.504158 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:20:51.504177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:20:51.504185 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:20:51.504194 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:20:51.504204 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:20:51.504212 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:20:51.504221 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:20:51.504229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:51.504238 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:20:51.504246 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:20:51.504255 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:20:51.504263 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:20:51.504274 systemd[1]: Reached target machines.target - Containers. May 17 00:20:51.504282 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:20:51.504291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:20:51.504299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:20:51.504307 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:20:51.504316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:20:51.504325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:20:51.504333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:20:51.504344 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 17 00:20:51.504352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:20:51.504361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:20:51.504369 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:20:51.504377 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:20:51.504386 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:20:51.504394 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:20:51.504404 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:20:51.504412 kernel: loop: module loaded May 17 00:20:51.504422 kernel: fuse: init (API version 7.39) May 17 00:20:51.504430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:20:51.504439 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:20:51.504447 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:20:51.504455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:20:51.504463 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:20:51.504472 systemd[1]: Stopped verity-setup.service. May 17 00:20:51.504480 kernel: ACPI: bus type drm_connector registered May 17 00:20:51.504489 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:51.504499 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:20:51.504507 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:20:51.504515 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:20:51.504524 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:20:51.504532 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:20:51.504540 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:20:51.504549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:20:51.504557 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:20:51.504582 systemd-journald[1119]: Collecting audit messages is disabled. May 17 00:20:51.504597 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:20:51.504608 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:20:51.504617 systemd-journald[1119]: Journal started May 17 00:20:51.504637 systemd-journald[1119]: Runtime Journal (/run/log/journal/738506c1683542ad8ddc5488f1bdc758) is 8.0M, max 78.3M, 70.3M free. May 17 00:20:51.169259 systemd[1]: Queued start job for default target multi-user.target. May 17 00:20:51.185308 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 17 00:20:51.185972 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:20:51.526323 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:20:51.524980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:20:51.525248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:20:51.526111 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 00:20:51.526693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:20:51.527581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:20:51.527853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:20:51.528833 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:20:51.529053 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:20:51.529972 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:20:51.530310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:20:51.531364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:20:51.532393 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:20:51.533419 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:20:51.550817 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:20:51.558132 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:20:51.564143 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:20:51.566223 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:20:51.566254 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:20:51.568749 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:20:51.574328 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:20:51.577400 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:20:51.578062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:51.580837 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:20:51.583504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:20:51.585780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:20:51.589329 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:20:51.590464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:20:51.595305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:20:51.602857 systemd-journald[1119]: Time spent on flushing to /var/log/journal/738506c1683542ad8ddc5488f1bdc758 is 27.185ms for 970 entries. May 17 00:20:51.602857 systemd-journald[1119]: System Journal (/var/log/journal/738506c1683542ad8ddc5488f1bdc758) is 8.0M, max 195.6M, 187.6M free. May 17 00:20:51.645402 systemd-journald[1119]: Received client request to flush runtime journal. May 17 00:20:51.645522 kernel: loop0: detected capacity change from 0 to 8 May 17 00:20:51.609327 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:20:51.614343 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
May 17 00:20:51.622717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:20:51.625545 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:20:51.627300 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:20:51.628041 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:20:51.642402 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:20:51.658318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:20:51.653540 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:20:51.654575 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:20:51.662752 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:20:51.671037 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:20:51.695695 kernel: loop1: detected capacity change from 0 to 140768 May 17 00:20:51.706929 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. May 17 00:20:51.706947 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. May 17 00:20:51.717417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:20:51.719774 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:20:51.721626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:20:51.724942 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:20:51.728676 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:20:51.738272 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:20:51.747207 kernel: loop2: detected capacity change from 0 to 224512 May 17 00:20:51.770133 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:20:51.779333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:20:51.800584 kernel: loop3: detected capacity change from 0 to 142488 May 17 00:20:51.808041 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. May 17 00:20:51.809597 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. May 17 00:20:51.816850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:20:51.842193 kernel: loop4: detected capacity change from 0 to 8 May 17 00:20:51.846948 kernel: loop5: detected capacity change from 0 to 140768 May 17 00:20:51.868187 kernel: loop6: detected capacity change from 0 to 224512 May 17 00:20:51.897696 kernel: loop7: detected capacity change from 0 to 142488 May 17 00:20:51.918043 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 17 00:20:51.918952 (sd-merge)[1187]: Merged extensions into '/usr'. May 17 00:20:51.924095 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:20:51.924309 systemd[1]: Reloading... May 17 00:20:52.003198 zram_generator::config[1212]: No configuration found. 
May 17 00:20:52.109074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:52.141798 systemd[1]: Reloading finished in 216 ms. May 17 00:20:52.146943 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:20:52.165234 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:20:52.167896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:20:52.177318 systemd[1]: Starting ensure-sysext.service... May 17 00:20:52.179329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:20:52.191229 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... May 17 00:20:52.191242 systemd[1]: Reloading... May 17 00:20:52.214880 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:20:52.215459 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:20:52.216276 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:20:52.216544 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 17 00:20:52.216652 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 17 00:20:52.219759 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:20:52.219824 systemd-tmpfiles[1257]: Skipping /boot May 17 00:20:52.233877 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:20:52.233939 systemd-tmpfiles[1257]: Skipping /boot May 17 00:20:52.300148 zram_generator::config[1283]: No configuration found. May 17 00:20:52.401495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:52.440441 systemd[1]: Reloading finished in 248 ms. May 17 00:20:52.459430 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:20:52.464627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:20:52.477401 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:20:52.481371 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:20:52.485515 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:20:52.488718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:20:52.493270 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:20:52.501856 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:20:52.504156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:52.504322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
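The docker.socket warning above recurs on every daemon reload until the unit stops referencing /var/run; a drop-in that re-points the listener at /run would address it. A minimal sketch (the drop-in path and file name are hypothetical):

    # /etc/systemd/system/docker.socket.d/10-runtime-dir.conf
    [Socket]
    # list-type settings must be cleared before being replaced
    ListenStream=
    ListenStream=/run/docker.sock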
May 17 00:20:52.508396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:20:52.512391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:20:52.516376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:20:52.517563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:52.517657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:52.530340 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:20:52.531776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:52.531942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:20:52.532088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:52.533188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:52.534991 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:52.537187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:20:52.541391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:20:52.543339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:20:52.543460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:20:52.549556 systemd[1]: Finished ensure-sysext.service. May 17 00:20:52.562745 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:20:52.563365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:20:52.572084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:20:52.572560 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:20:52.590128 systemd-udevd[1334]: Using default interface naming scheme 'v255'. May 17 00:20:52.592416 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:20:52.594672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:20:52.596380 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:20:52.598506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:20:52.599086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:20:52.600604 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:20:52.601243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:20:52.607322 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
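The repeated "was skipped because of an unmet condition check" entries here are not failures: systemd evaluates Condition*= directives and quietly skips units whose conditions do not hold. The Xen-only units cited above carry settings of this shape (values taken from the log messages; the earlier remount-root skip used the negated form):

    [Unit]
    # proc-xen.mount and xenserver-pv-version.service carry this condition
    ConditionVirtualization=xen
    # negation with "!": run only when / is NOT yet read-write
    ConditionPathIsReadWrite=!/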
May 17 00:20:52.619695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:20:52.619763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:20:52.627339 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:20:52.628324 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:20:52.629741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:20:52.643561 augenrules[1370]: No rules May 17 00:20:52.643818 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:20:52.653804 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:20:52.656466 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:20:52.665328 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:20:52.741522 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:20:52.763952 systemd-resolved[1333]: Positive Trust Anchors: May 17 00:20:52.763972 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:20:52.763999 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:20:52.780632 systemd-resolved[1333]: Defaulting to hostname 'linux'. May 17 00:20:52.784147 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:20:52.784902 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:20:52.786318 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:20:52.786901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:20:52.799915 systemd-networkd[1381]: lo: Link UP May 17 00:20:52.800253 systemd-networkd[1381]: lo: Gained carrier May 17 00:20:52.803460 systemd-networkd[1381]: Enumeration completed May 17 00:20:52.803532 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:20:52.804463 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:52.804467 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:20:52.805247 systemd[1]: Reached target network.target - Network. May 17 00:20:52.807821 systemd-networkd[1381]: eth0: Link UP May 17 00:20:52.807913 systemd-networkd[1381]: eth0: Gained carrier May 17 00:20:52.807964 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
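eth0 above was matched by the stock catch-all unit zz-default.network; a fallback .network file of that kind looks approximately like this (the real Flatcar file may match more narrowly):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes

The "potentially unpredictable interface name" note is networkd pointing out that matching by interface name is not guaranteed stable across reboots.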
May 17 00:20:52.811454 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:20:52.820991 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:20:52.844233 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1392) May 17 00:20:52.849197 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:20:52.856238 kernel: ACPI: button: Power Button [PWRF] May 17 00:20:52.876107 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:20:52.876429 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:20:52.877674 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:20:52.887881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:20:52.895672 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:20:52.900186 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:20:52.912250 kernel: EDAC MC: Ver: 3.0.0 May 17 00:20:52.918239 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:20:52.953437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:20:52.957190 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:20:52.976700 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:20:52.982382 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:20:52.995025 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:20:53.021501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:20:53.022322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:20:53.029309 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:20:53.081915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:20:53.083091 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:20:53.084265 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:20:53.085101 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:20:53.086421 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:20:53.087120 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:20:53.087477 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:20:53.088070 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:20:53.088720 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:20:53.088803 systemd[1]: Reached target paths.target - Path Units. May 17 00:20:53.089455 systemd[1]: Reached target timers.target - Timer Units. May 17 00:20:53.091367 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
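The lvm WARNING above ("Failed to connect to lvmetad. Falling back to device scanning.") is benign on this image: no lvmetad daemon runs, so LVM commands scan block devices directly. On LVM2 versions that still ship the daemon (it was removed in 2.03), the behavior is selected in lvm.conf:

    # /etc/lvm/lvm.conf (fragment, older LVM2 only)
    global {
        # 0 = no metadata cache daemon; commands scan devices themselves
        use_lvmetad = 0
    }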
May 17 00:20:53.094152 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:20:53.102188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:20:53.103298 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:20:53.103923 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:20:53.104467 systemd[1]: Reached target basic.target - Basic System. May 17 00:20:53.105007 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:20:53.105044 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:20:53.106147 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:20:53.109328 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:20:53.112354 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:20:53.115258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:20:53.118335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:20:53.120141 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:20:53.122945 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:20:53.134327 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:20:53.137305 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:20:53.142361 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:20:53.154384 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:20:53.156040 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:20:53.157604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:20:53.158321 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:20:53.162276 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:20:53.163709 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:20:53.173575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
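ssh-key-proc-cmdline.service, started above, is the Flatcar helper that installs an SSH key passed on the kernel command line. Assuming the documented sshkey= parameter (not present on this boot's command line), an invocation would look like:

    # appended to the kernel command line at boot; the key value is an example
    sshkey="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... core@example"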
May 17 00:20:53.178301 extend-filesystems[1432]: Found loop4 May 17 00:20:53.178301 extend-filesystems[1432]: Found loop5 May 17 00:20:53.178301 extend-filesystems[1432]: Found loop6 May 17 00:20:53.178301 extend-filesystems[1432]: Found loop7 May 17 00:20:53.178301 extend-filesystems[1432]: Found sda May 17 00:20:53.178301 extend-filesystems[1432]: Found sda1 May 17 00:20:53.178301 extend-filesystems[1432]: Found sda2 May 17 00:20:53.178301 extend-filesystems[1432]: Found sda3 May 17 00:20:53.178301 extend-filesystems[1432]: Found usr May 17 00:20:53.178301 extend-filesystems[1432]: Found sda4 May 17 00:20:53.178301 extend-filesystems[1432]: Found sda6 May 17 00:20:53.178301 extend-filesystems[1432]: Found sda7 May 17 00:20:53.178301 extend-filesystems[1432]: Found sda9 May 17 00:20:53.178301 extend-filesystems[1432]: Checking size of /dev/sda9 May 17 00:20:53.173766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:20:53.230273 dbus-daemon[1430]: [system] SELinux support is enabled May 17 00:20:53.282027 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 17 00:20:53.282050 jq[1431]: false May 17 00:20:53.282133 extend-filesystems[1432]: Resized partition /dev/sda9 May 17 00:20:53.285513 coreos-metadata[1429]: May 17 00:20:53.221 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 17 00:20:53.190615 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:20:53.252150 dbus-daemon[1430]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1381 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:20:53.287312 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) May 17 00:20:53.297213 tar[1444]: linux-amd64/LICENSE May 17 00:20:53.297213 tar[1444]: linux-amd64/helm May 17 00:20:53.191202 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:20:53.257898 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:20:53.226974 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:20:53.230540 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:20:53.231559 systemd-networkd[1381]: eth0: DHCPv4 address 172.233.222.125/24, gateway 172.233.222.1 acquired from 23.210.200.22 May 17 00:20:53.301082 jq[1442]: true May 17 00:20:53.235755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:20:53.301287 update_engine[1441]: I20250517 00:20:53.278273 1441 main.cc:92] Flatcar Update Engine starting May 17 00:20:53.301287 update_engine[1441]: I20250517 00:20:53.295303 1441 update_check_scheduler.cc:74] Next update check in 9m31s May 17 00:20:53.235790 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:20:53.236322 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. 
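The extend-filesystems walk above, together with the kernel's "EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks" (about 2.1 GiB to 77.7 GiB at 4k per block), is the automatic root grow. Done by hand, the same online resize is two commands (a sketch; growpart comes from cloud-utils):

    # grow partition 9 to fill the disk, then resize the mounted ext4 online
    growpart /dev/sda 9
    resize2fs /dev/sda9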
May 17 00:20:53.237032 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:20:53.301894 jq[1465]: true May 17 00:20:53.237053 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:20:53.238564 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:20:53.238823 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:20:53.269319 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:20:53.294522 systemd[1]: Started update-engine.service - Update Engine. May 17 00:20:53.306280 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:20:53.390196 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1393) May 17 00:20:53.428540 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:20:53.428577 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:20:53.428811 systemd-logind[1437]: New seat seat0. May 17 00:20:53.429624 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:20:53.506696 bash[1495]: Updated "/home/core/.ssh/authorized_keys" May 17 00:20:53.505807 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:20:53.517440 systemd[1]: Starting sshkeys.service... May 17 00:20:53.536423 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:20:53.540791 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:20:53.548398 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:20:53.558929 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:20:53.559448 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1467 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:20:53.560245 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 17 00:20:53.570418 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:20:53.595565 containerd[1455]: time="2025-05-17T00:20:53.595489201Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:20:53.601695 polkitd[1504]: Started polkitd version 121 May 17 00:20:53.611195 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 17 00:20:53.616304 polkitd[1504]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:20:53.616414 polkitd[1504]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:20:53.619602 systemd[1]: Started polkit.service - Authorization Manager. 
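polkitd above scans /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d; polkit rules are small JavaScript files evaluated per authorization request. A hypothetical rule of the kind it compiles:

    // /etc/polkit-1/rules.d/49-example.rules (file name and policy hypothetical)
    polkit.addRule(function(action, subject) {
        // let members of "wheel" set the hostname without a prompt
        if (action.id == "org.freedesktop.hostname1.set-hostname" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
        // returning nothing falls through to the next rule
    });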
May 17 00:20:53.617421 polkitd[1504]: Finished loading, compiling and executing 2 rules May 17 00:20:53.619500 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:20:53.621016 polkitd[1504]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:20:53.624111 extend-filesystems[1468]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:20:53.624111 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 10 May 17 00:20:53.624111 extend-filesystems[1468]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 17 00:20:53.664967 extend-filesystems[1432]: Resized filesystem in /dev/sda9 May 17 00:20:53.665710 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:20:53.625697 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:20:53.627236 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:20:53.679594 containerd[1455]: time="2025-05-17T00:20:53.679353859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:53.682742 coreos-metadata[1503]: May 17 00:20:53.681 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 17 00:20:53.682995 systemd-hostnamed[1467]: Hostname set to <172-233-222-125> (transient) May 17 00:20:53.683500 containerd[1455]: time="2025-05-17T00:20:53.683464917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:53.683561 containerd[1455]: time="2025-05-17T00:20:53.683546657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:20:53.683628 containerd[1455]: time="2025-05-17T00:20:53.683613787Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:20:53.683708 systemd-resolved[1333]: System hostname changed to '172-233-222-125'. May 17 00:20:53.683882 containerd[1455]: time="2025-05-17T00:20:53.683846197Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:20:53.683882 containerd[1455]: time="2025-05-17T00:20:53.683866117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:20:53.684021 containerd[1455]: time="2025-05-17T00:20:53.683948827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:53.684021 containerd[1455]: time="2025-05-17T00:20:53.683972417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:53.684281 containerd[1455]: time="2025-05-17T00:20:53.684239267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:53.684310 containerd[1455]: time="2025-05-17T00:20:53.684283507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 17 00:20:53.684310 containerd[1455]: time="2025-05-17T00:20:53.684304087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:53.684355 containerd[1455]: time="2025-05-17T00:20:53.684317357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:20:53.684453 containerd[1455]: time="2025-05-17T00:20:53.684422407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:53.684738 containerd[1455]: time="2025-05-17T00:20:53.684705437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:20:53.684890 containerd[1455]: time="2025-05-17T00:20:53.684859547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:20:53.684890 containerd[1455]: time="2025-05-17T00:20:53.684886047Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:20:53.685041 containerd[1455]: time="2025-05-17T00:20:53.685013927Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:20:53.685110 containerd[1455]: time="2025-05-17T00:20:53.685084857Z" level=info msg="metadata content store policy set" policy=shared May 17 00:20:53.689329 containerd[1455]: time="2025-05-17T00:20:53.689298604Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:20:53.689413 containerd[1455]: time="2025-05-17T00:20:53.689355284Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:20:53.689438 containerd[1455]: time="2025-05-17T00:20:53.689420894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:20:53.689455 containerd[1455]: time="2025-05-17T00:20:53.689443164Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:20:53.689489 containerd[1455]: time="2025-05-17T00:20:53.689463684Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:20:53.689651 containerd[1455]: time="2025-05-17T00:20:53.689618984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:20:53.689964 containerd[1455]: time="2025-05-17T00:20:53.689939074Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:20:53.690102 containerd[1455]: time="2025-05-17T00:20:53.690073774Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:20:53.690125 containerd[1455]: time="2025-05-17T00:20:53.690102304Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:20:53.690148 containerd[1455]: time="2025-05-17T00:20:53.690120604Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 17 00:20:53.690148 containerd[1455]: time="2025-05-17T00:20:53.690141414Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690191 containerd[1455]: time="2025-05-17T00:20:53.690177994Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690207 containerd[1455]: time="2025-05-17T00:20:53.690195384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690224 containerd[1455]: time="2025-05-17T00:20:53.690213524Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690248 containerd[1455]: time="2025-05-17T00:20:53.690231794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690265 containerd[1455]: time="2025-05-17T00:20:53.690250304Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690281 containerd[1455]: time="2025-05-17T00:20:53.690270634Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690297 containerd[1455]: time="2025-05-17T00:20:53.690285914Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690311424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690334164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690348824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690364794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690385034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690401744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690417344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690432864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690449414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690466624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690480074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690493984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690510204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690544864Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:20:53.690831 containerd[1455]: time="2025-05-17T00:20:53.690568364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690583264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690595724Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690658804Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690677874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690690334Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690703704Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690715154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690728774Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690750824Z" level=info msg="NRI interface is disabled by configuration." May 17 00:20:53.691041 containerd[1455]: time="2025-05-17T00:20:53.690763134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:20:53.692067 containerd[1455]: time="2025-05-17T00:20:53.691065454Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:20:53.692067 containerd[1455]: time="2025-05-17T00:20:53.691122194Z" level=info msg="Connect containerd service" May 17 00:20:53.692067 containerd[1455]: time="2025-05-17T00:20:53.691509663Z" level=info msg="using legacy CRI server" May 17 00:20:53.692067 containerd[1455]: time="2025-05-17T00:20:53.691535523Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:20:53.692067 containerd[1455]: time="2025-05-17T00:20:53.691654243Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:20:53.698752 containerd[1455]: time="2025-05-17T00:20:53.696827431Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:20:53.698752 
containerd[1455]: time="2025-05-17T00:20:53.697573790Z" level=info msg="Start subscribing containerd event" May 17 00:20:53.698752 containerd[1455]: time="2025-05-17T00:20:53.697647480Z" level=info msg="Start recovering state" May 17 00:20:53.698752 containerd[1455]: time="2025-05-17T00:20:53.697732410Z" level=info msg="Start event monitor" May 17 00:20:53.698752 containerd[1455]: time="2025-05-17T00:20:53.697743790Z" level=info msg="Start snapshots syncer" May 17 00:20:53.698752 containerd[1455]: time="2025-05-17T00:20:53.697754130Z" level=info msg="Start cni network conf syncer for default" May 17 00:20:53.698752 containerd[1455]: time="2025-05-17T00:20:53.697766070Z" level=info msg="Start streaming server" May 17 00:20:53.699197 containerd[1455]: time="2025-05-17T00:20:53.698992570Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:20:53.699197 containerd[1455]: time="2025-05-17T00:20:53.699060940Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:20:53.699197 containerd[1455]: time="2025-05-17T00:20:53.699134280Z" level=info msg="containerd successfully booted in 0.108500s" May 17 00:20:53.699288 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:20:53.704225 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:20:53.714314 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:20:53.722567 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:20:53.723402 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:20:53.735285 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:20:53.747804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:20:53.756692 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:20:53.764500 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:20:53.765512 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:20:53.776360 coreos-metadata[1503]: May 17 00:20:53.776 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 17 00:20:53.911401 coreos-metadata[1503]: May 17 00:20:53.910 INFO Fetch successful May 17 00:20:53.936316 update-ssh-keys[1538]: Updated "/home/core/.ssh/authorized_keys" May 17 00:20:53.938240 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:20:53.942400 systemd[1]: Finished sshkeys.service. May 17 00:20:53.963539 tar[1444]: linux-amd64/README.md May 17 00:20:53.975721 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:20:54.237997 coreos-metadata[1429]: May 17 00:20:54.237 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 17 00:20:54.329851 coreos-metadata[1429]: May 17 00:20:54.329 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 17 00:20:54.513723 coreos-metadata[1429]: May 17 00:20:54.513 INFO Fetch successful May 17 00:20:54.513944 coreos-metadata[1429]: May 17 00:20:54.513 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 17 00:20:54.562467 systemd-networkd[1381]: eth0: Gained IPv6LL May 17 00:20:54.563977 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:20:54.567470 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:20:54.568587 systemd[1]: Reached target network-online.target - Network is Online. 
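The long "Start cri plugin with config" dump above is containerd echoing its effective CRI configuration. Its load-bearing values (SystemdCgroup:true for runc, the pause:3.8 sandbox image, CNI under /etc/cni/net.d) correspond to a config.toml fragment like this sketch:

    # /etc/containerd/config.toml (fragment; containerd 1.7, config version 2)
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true

The "failed to load cni during init" error a few entries back matches an empty /etc/cni/net.d; a CNI plugin is expected to populate that directory later.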
May 17 00:20:54.576668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:54.579470 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:20:54.606287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:20:54.769563 coreos-metadata[1429]: May 17 00:20:54.769 INFO Fetch successful May 17 00:20:54.865353 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:20:54.866435 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:20:55.522411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:55.523487 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:20:55.526855 systemd[1]: Startup finished in 806ms (kernel) + 6.942s (initrd) + 4.937s (userspace) = 12.685s. May 17 00:20:55.552604 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:56.054379 kubelet[1582]: E0517 00:20:56.054298 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:56.058707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:56.059001 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:56.063486 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:20:57.302320 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:20:57.303355 systemd[1]: Started sshd@0-172.233.222.125:22-139.178.89.65:48958.service - OpenSSH per-connection server daemon (139.178.89.65:48958). May 17 00:20:57.645978 sshd[1594]: Accepted publickey for core from 139.178.89.65 port 48958 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:20:57.648712 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:57.658447 systemd-logind[1437]: New session 1 of user core. May 17 00:20:57.660452 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:20:57.672566 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:20:57.687611 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:20:57.695437 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:20:57.698987 (systemd)[1598]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:20:57.799063 systemd[1598]: Queued start job for default target default.target. May 17 00:20:57.806531 systemd[1598]: Created slice app.slice - User Application Slice. May 17 00:20:57.806559 systemd[1598]: Reached target paths.target - Paths. May 17 00:20:57.806570 systemd[1598]: Reached target timers.target - Timers. May 17 00:20:57.808509 systemd[1598]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:20:57.822496 systemd[1598]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:20:57.822654 systemd[1598]: Reached target sockets.target - Sockets. 
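The kubelet exit above is the expected first-boot failure: /var/lib/kubelet/config.yaml does not exist until kubeadm (or another provisioner) writes it. A minimal file of the required kind, shown only as a sketch:

    # /var/lib/kubelet/config.yaml (sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches the SystemdCgroup=true that containerd logged earlier
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

systemd retries the unit (the "Scheduled restart job" further down), so the node recovers as soon as the file appears.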
May 17 00:20:57.822674 systemd[1598]: Reached target basic.target - Basic System. May 17 00:20:57.822726 systemd[1598]: Reached target default.target - Main User Target. May 17 00:20:57.822769 systemd[1598]: Startup finished in 115ms. May 17 00:20:57.822911 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:20:57.827011 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:20:57.828529 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:20:58.089020 systemd[1]: Started sshd@1-172.233.222.125:22-139.178.89.65:48964.service - OpenSSH per-connection server daemon (139.178.89.65:48964). May 17 00:20:58.414437 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 48964 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:20:58.416535 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:58.421564 systemd-logind[1437]: New session 2 of user core. May 17 00:20:58.429946 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:20:58.660465 sshd[1609]: pam_unix(sshd:session): session closed for user core May 17 00:20:58.664079 systemd[1]: sshd@1-172.233.222.125:22-139.178.89.65:48964.service: Deactivated successfully. May 17 00:20:58.665762 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:20:58.667417 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. May 17 00:20:58.669092 systemd-logind[1437]: Removed session 2. May 17 00:20:58.718964 systemd[1]: Started sshd@2-172.233.222.125:22-139.178.89.65:48972.service - OpenSSH per-connection server daemon (139.178.89.65:48972). May 17 00:20:59.043925 sshd[1616]: Accepted publickey for core from 139.178.89.65 port 48972 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:20:59.045190 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:59.048643 systemd-logind[1437]: New session 3 of user core. May 17 00:20:59.053257 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:20:59.286495 sshd[1616]: pam_unix(sshd:session): session closed for user core May 17 00:20:59.288833 systemd[1]: sshd@2-172.233.222.125:22-139.178.89.65:48972.service: Deactivated successfully. May 17 00:20:59.290235 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:20:59.291106 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. May 17 00:20:59.291798 systemd-logind[1437]: Removed session 3. May 17 00:20:59.345583 systemd[1]: Started sshd@3-172.233.222.125:22-139.178.89.65:48988.service - OpenSSH per-connection server daemon (139.178.89.65:48988). May 17 00:20:59.676456 sshd[1623]: Accepted publickey for core from 139.178.89.65 port 48988 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:20:59.677962 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:59.680646 systemd-logind[1437]: New session 4 of user core. May 17 00:20:59.688255 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:20:59.926648 sshd[1623]: pam_unix(sshd:session): session closed for user core May 17 00:20:59.932990 systemd[1]: sshd@3-172.233.222.125:22-139.178.89.65:48988.service: Deactivated successfully. May 17 00:20:59.935873 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:20:59.936716 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. 
May 17 00:20:59.938049 systemd-logind[1437]: Removed session 4. May 17 00:20:59.992708 systemd[1]: Started sshd@4-172.233.222.125:22-139.178.89.65:49004.service - OpenSSH per-connection server daemon (139.178.89.65:49004). May 17 00:21:00.327446 sshd[1630]: Accepted publickey for core from 139.178.89.65 port 49004 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:00.328863 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:00.332766 systemd-logind[1437]: New session 5 of user core. May 17 00:21:00.340268 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:21:00.536801 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:21:00.537207 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:00.555059 sudo[1633]: pam_unix(sudo:session): session closed for user root May 17 00:21:00.608730 sshd[1630]: pam_unix(sshd:session): session closed for user core May 17 00:21:00.613474 systemd[1]: sshd@4-172.233.222.125:22-139.178.89.65:49004.service: Deactivated successfully. May 17 00:21:00.615071 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:21:00.615600 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. May 17 00:21:00.616620 systemd-logind[1437]: Removed session 5. May 17 00:21:00.669966 systemd[1]: Started sshd@5-172.233.222.125:22-139.178.89.65:49020.service - OpenSSH per-connection server daemon (139.178.89.65:49020). May 17 00:21:01.017923 sshd[1638]: Accepted publickey for core from 139.178.89.65 port 49020 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:01.019664 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:01.023441 systemd-logind[1437]: New session 6 of user core. May 17 00:21:01.035267 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:21:01.222502 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:21:01.222873 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:01.226663 sudo[1642]: pam_unix(sudo:session): session closed for user root May 17 00:21:01.232324 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:21:01.232663 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:01.243412 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:21:01.246455 auditctl[1645]: No rules May 17 00:21:01.246785 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:21:01.246966 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:21:01.248998 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:21:01.273599 augenrules[1663]: No rules May 17 00:21:01.276058 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:21:01.277674 sudo[1641]: pam_unix(sudo:session): session closed for user root May 17 00:21:01.330799 sshd[1638]: pam_unix(sshd:session): session closed for user core May 17 00:21:01.333929 systemd[1]: sshd@5-172.233.222.125:22-139.178.89.65:49020.service: Deactivated successfully. May 17 00:21:01.335786 systemd[1]: session-6.scope: Deactivated successfully. 
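The auditctl/augenrules exchange above ends with an empty rule set: the preceding sudo removed /etc/audit/rules.d/80-selinux.rules and 99-default.rules, and restarting audit-rules.service loaded the now-empty compiled set. Fragments in that directory use auditctl syntax; hypothetical contents:

    # /etc/audit/rules.d/99-default.rules (illustrative)
    # -D flushes existing rules; -b sizes the kernel audit backlog
    -D
    -b 8192

augenrules concatenates the fragments and hands them to auditctl -R, which is why deleting the files yields "No rules".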
May 17 00:21:01.336984 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. May 17 00:21:01.338041 systemd-logind[1437]: Removed session 6. May 17 00:21:01.386258 systemd[1]: Started sshd@6-172.233.222.125:22-139.178.89.65:49028.service - OpenSSH per-connection server daemon (139.178.89.65:49028). May 17 00:21:01.709297 sshd[1671]: Accepted publickey for core from 139.178.89.65 port 49028 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:21:01.710522 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:01.714419 systemd-logind[1437]: New session 7 of user core. May 17 00:21:01.724260 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:21:01.910128 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:21:01.910468 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:21:02.165416 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:21:02.174536 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:21:02.437711 dockerd[1689]: time="2025-05-17T00:21:02.437199619Z" level=info msg="Starting up" May 17 00:21:02.545519 dockerd[1689]: time="2025-05-17T00:21:02.545464655Z" level=info msg="Loading containers: start." May 17 00:21:02.650208 kernel: Initializing XFRM netlink socket May 17 00:21:02.683368 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:21:02.696005 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:21:02.733508 systemd-networkd[1381]: docker0: Link UP May 17 00:21:02.734156 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:21:02.749548 dockerd[1689]: time="2025-05-17T00:21:02.749501723Z" level=info msg="Loading containers: done." May 17 00:21:02.765392 dockerd[1689]: time="2025-05-17T00:21:02.765328075Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:21:02.765657 dockerd[1689]: time="2025-05-17T00:21:02.765444065Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:21:02.765657 dockerd[1689]: time="2025-05-17T00:21:02.765557765Z" level=info msg="Daemon has completed initialization" May 17 00:21:02.793726 dockerd[1689]: time="2025-05-17T00:21:02.793623891Z" level=info msg="API listen on /run/docker.sock" May 17 00:21:02.794499 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:21:03.544311 containerd[1455]: time="2025-05-17T00:21:03.544260426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:21:04.389308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733777229.mount: Deactivated successfully. 
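dockerd above warns that overlay2 cannot use native diff because this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, then proceeds with overlay2 anyway; note it also reports "API listen on /run/docker.sock", the path the earlier socket-unit warning pointed at. Pinning the storage driver explicitly is one setting in the daemon config (sketch):

    # /etc/docker/daemon.json (sketch)
    {
      "storage-driver": "overlay2"
    }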
May 17 00:21:05.505570 containerd[1455]: time="2025-05-17T00:21:05.505470155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:05.506968 containerd[1455]: time="2025-05-17T00:21:05.506877114Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 17 00:21:05.508970 containerd[1455]: time="2025-05-17T00:21:05.507332044Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:05.510053 containerd[1455]: time="2025-05-17T00:21:05.509665493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:05.510618 containerd[1455]: time="2025-05-17T00:21:05.510585572Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.966283566s" May 17 00:21:05.510656 containerd[1455]: time="2025-05-17T00:21:05.510624332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:21:05.514054 containerd[1455]: time="2025-05-17T00:21:05.513958521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:21:06.178749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:21:06.184594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:06.346778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:06.351588 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:21:06.387924 kubelet[1891]: E0517 00:21:06.387857 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:21:06.393561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:21:06.393748 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
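The PullImage round trip above runs through containerd's CRI in the k8s.io namespace; the same pull can be reproduced by hand with either stock client:

    # directly against containerd, in the namespace the CRI uses
    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.5

    # or through the CRI endpoint
    crictl pull registry.k8s.io/kube-apiserver:v1.32.5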
May 17 00:21:07.334820 containerd[1455]: time="2025-05-17T00:21:07.334759170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:07.335838 containerd[1455]: time="2025-05-17T00:21:07.335780620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 17 00:21:07.336436 containerd[1455]: time="2025-05-17T00:21:07.336419129Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:07.338877 containerd[1455]: time="2025-05-17T00:21:07.338670888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:07.339410 containerd[1455]: time="2025-05-17T00:21:07.339379538Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.825262277s" May 17 00:21:07.339447 containerd[1455]: time="2025-05-17T00:21:07.339412248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:21:07.341224 containerd[1455]: time="2025-05-17T00:21:07.341200277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:21:09.006298 containerd[1455]: time="2025-05-17T00:21:09.005302895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:09.006298 containerd[1455]: time="2025-05-17T00:21:09.006143014Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 17 00:21:09.007278 containerd[1455]: time="2025-05-17T00:21:09.007220314Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:09.010128 containerd[1455]: time="2025-05-17T00:21:09.010086222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:09.010996 containerd[1455]: time="2025-05-17T00:21:09.010875532Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.669648395s" May 17 00:21:09.010996 containerd[1455]: time="2025-05-17T00:21:09.010901752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:21:09.011809 
containerd[1455]: time="2025-05-17T00:21:09.011779461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:21:10.242758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400038865.mount: Deactivated successfully. May 17 00:21:10.526149 containerd[1455]: time="2025-05-17T00:21:10.526017044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:10.527211 containerd[1455]: time="2025-05-17T00:21:10.527158713Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 17 00:21:10.528446 containerd[1455]: time="2025-05-17T00:21:10.527802933Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:10.529016 containerd[1455]: time="2025-05-17T00:21:10.528982453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:10.529768 containerd[1455]: time="2025-05-17T00:21:10.529488602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.517681041s" May 17 00:21:10.529768 containerd[1455]: time="2025-05-17T00:21:10.529514382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:21:10.530713 containerd[1455]: time="2025-05-17T00:21:10.530688982Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:21:11.140881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386831666.mount: Deactivated successfully. 
May 17 00:21:11.792368 containerd[1455]: time="2025-05-17T00:21:11.792320621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:11.793435 containerd[1455]: time="2025-05-17T00:21:11.793408600Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:21:11.793753 containerd[1455]: time="2025-05-17T00:21:11.793718630Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:11.795728 containerd[1455]: time="2025-05-17T00:21:11.795711229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:11.796729 containerd[1455]: time="2025-05-17T00:21:11.796540469Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.265825937s" May 17 00:21:11.796729 containerd[1455]: time="2025-05-17T00:21:11.796568789Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:21:11.797652 containerd[1455]: time="2025-05-17T00:21:11.797636508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:21:12.389062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204237552.mount: Deactivated successfully. 
May 17 00:21:12.392723 containerd[1455]: time="2025-05-17T00:21:12.392695520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:12.393374 containerd[1455]: time="2025-05-17T00:21:12.393328120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:21:12.394559 containerd[1455]: time="2025-05-17T00:21:12.393520030Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:12.396301 containerd[1455]: time="2025-05-17T00:21:12.396273929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:12.397255 containerd[1455]: time="2025-05-17T00:21:12.396913568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.2174ms" May 17 00:21:12.397255 containerd[1455]: time="2025-05-17T00:21:12.396947538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:21:12.397543 containerd[1455]: time="2025-05-17T00:21:12.397524098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:21:13.100188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489508255.mount: Deactivated successfully. May 17 00:21:14.605245 containerd[1455]: time="2025-05-17T00:21:14.605177204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:14.606269 containerd[1455]: time="2025-05-17T00:21:14.605937934Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 17 00:21:14.607976 containerd[1455]: time="2025-05-17T00:21:14.606637823Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:14.608694 containerd[1455]: time="2025-05-17T00:21:14.608663752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:14.609867 containerd[1455]: time="2025-05-17T00:21:14.609476872Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.211930514s" May 17 00:21:14.609867 containerd[1455]: time="2025-05-17T00:21:14.609501602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:21:16.129063 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
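
Two different pause images appear in this log: the pre-pull above fetches pause:3.10, while the pod sandboxes created later (00:21:18) pull pause:3.8, which is containerd 1.7's default sandbox image (this host runs containerd v1.7.21 per the kubelet log below). Which image the CRI plugin actually uses for sandboxes is governed by containerd's sandbox_image setting, checkable with a sketch like:

    containerd config dump | grep sandbox_image
    # expected output along the lines of:
    #   sandbox_image = "registry.k8s.io/pause:3.8"
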
May 17 00:21:16.140314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:16.164387 systemd[1]: Reloading requested from client PID 2051 ('systemctl') (unit session-7.scope)... May 17 00:21:16.164402 systemd[1]: Reloading... May 17 00:21:16.275500 zram_generator::config[2097]: No configuration found. May 17 00:21:16.359731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:16.411408 systemd[1]: Reloading finished in 246 ms. May 17 00:21:16.463490 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:21:16.463570 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:21:16.463780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:16.465492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:16.591174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:16.594667 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:16.625686 kubelet[2146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:16.625686 kubelet[2146]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:21:16.625686 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
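
All three deprecation warnings point at the same fix: move the flags into the kubelet config file. Two of them have plain KubeletConfiguration v1beta1 field equivalents, sketched below; --pod-infra-container-image has no config-file field, since (as the warning itself says) sandbox image information moves to the CRI side. The /opt/libexec path mirrors the Flexvolume directory the kubelet probes at 00:21:17 below:

    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
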
May 17 00:21:16.625686 kubelet[2146]: I0517 00:21:16.625286 2146 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:17.090291 kubelet[2146]: I0517 00:21:17.090264 2146 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:21:17.090395 kubelet[2146]: I0517 00:21:17.090384 2146 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:17.090635 kubelet[2146]: I0517 00:21:17.090620 2146 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:21:17.114220 kubelet[2146]: E0517 00:21:17.114194 2146 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.233.222.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.222.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:21:17.114514 kubelet[2146]: I0517 00:21:17.114497 2146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:17.122188 kubelet[2146]: E0517 00:21:17.120304 2146 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:17.122188 kubelet[2146]: I0517 00:21:17.120323 2146 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:17.122960 kubelet[2146]: I0517 00:21:17.122947 2146 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:21:17.123133 kubelet[2146]: I0517 00:21:17.123110 2146 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:17.123257 kubelet[2146]: I0517 00:21:17.123130 2146 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:17.123337 kubelet[2146]: I0517 00:21:17.123262 2146 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:21:17.123337 kubelet[2146]: I0517 00:21:17.123270 2146 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:21:17.123370 kubelet[2146]: I0517 00:21:17.123354 2146 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:17.126972 kubelet[2146]: I0517 00:21:17.126879 2146 kubelet.go:446] "Attempting to sync node with API server" May 17 00:21:17.126972 kubelet[2146]: I0517 00:21:17.126900 2146 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:17.126972 kubelet[2146]: I0517 00:21:17.126914 2146 kubelet.go:352] "Adding apiserver pod source" May 17 00:21:17.126972 kubelet[2146]: I0517 00:21:17.126923 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:17.131871 kubelet[2146]: W0517 00:21:17.131765 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.222.125:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-125&limit=500&resourceVersion=0": dial tcp 172.233.222.125:6443: connect: connection refused May 17 00:21:17.131871 kubelet[2146]: E0517 00:21:17.131810 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.233.222.125:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-125&limit=500&resourceVersion=0\": dial tcp 172.233.222.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:21:17.131871 kubelet[2146]: I0517 
00:21:17.131863 2146 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:17.132116 kubelet[2146]: I0517 00:21:17.132095 2146 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:21:17.133484 kubelet[2146]: W0517 00:21:17.133121 2146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:21:17.136768 kubelet[2146]: I0517 00:21:17.136628 2146 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:17.136768 kubelet[2146]: I0517 00:21:17.136653 2146 server.go:1287] "Started kubelet" May 17 00:21:17.137142 kubelet[2146]: W0517 00:21:17.137114 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.222.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.222.125:6443: connect: connection refused May 17 00:21:17.137185 kubelet[2146]: E0517 00:21:17.137147 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.222.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.222.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:21:17.137214 kubelet[2146]: I0517 00:21:17.137187 2146 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:17.138104 kubelet[2146]: I0517 00:21:17.138084 2146 server.go:479] "Adding debug handlers to kubelet server" May 17 00:21:17.141809 kubelet[2146]: I0517 00:21:17.141755 2146 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:17.142017 kubelet[2146]: I0517 00:21:17.142005 2146 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:17.142319 kubelet[2146]: I0517 00:21:17.142064 2146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:17.143191 kubelet[2146]: E0517 00:21:17.142217 2146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.222.125:6443/api/v1/namespaces/default/events\": dial tcp 172.233.222.125:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-222-125.184028954d959150 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-222-125,UID:172-233-222-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-222-125,},FirstTimestamp:2025-05-17 00:21:17.136638288 +0000 UTC m=+0.538811272,LastTimestamp:2025-05-17 00:21:17.136638288 +0000 UTC m=+0.538811272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-222-125,}" May 17 00:21:17.144185 kubelet[2146]: I0517 00:21:17.143757 2146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:17.146209 kubelet[2146]: E0517 00:21:17.146185 2146 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-125\" not found" May 17 00:21:17.146250 kubelet[2146]: I0517 
00:21:17.146218 2146 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:17.146479 kubelet[2146]: I0517 00:21:17.146462 2146 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:17.146514 kubelet[2146]: I0517 00:21:17.146497 2146 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:17.146832 kubelet[2146]: W0517 00:21:17.146774 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.222.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.222.125:6443: connect: connection refused May 17 00:21:17.146865 kubelet[2146]: E0517 00:21:17.146835 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.222.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.222.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:21:17.147108 kubelet[2146]: I0517 00:21:17.147090 2146 factory.go:221] Registration of the systemd container factory successfully May 17 00:21:17.147272 kubelet[2146]: I0517 00:21:17.147243 2146 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:17.147684 kubelet[2146]: E0517 00:21:17.147668 2146 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:17.147954 kubelet[2146]: E0517 00:21:17.147931 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-125?timeout=10s\": dial tcp 172.233.222.125:6443: connect: connection refused" interval="200ms" May 17 00:21:17.148009 kubelet[2146]: I0517 00:21:17.147993 2146 factory.go:221] Registration of the containerd container factory successfully May 17 00:21:17.156947 kubelet[2146]: I0517 00:21:17.156919 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:21:17.157885 kubelet[2146]: I0517 00:21:17.157861 2146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:21:17.157885 kubelet[2146]: I0517 00:21:17.157880 2146 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:21:17.157934 kubelet[2146]: I0517 00:21:17.157892 2146 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
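
Every one of these "connection refused" errors has the same underlying cause: the kubelet is up before the kube-apiserver static pod it is about to launch, so all list/watch, lease and event calls to 172.233.222.125:6443 fail fast and are retried. A quick way to watch for the apiserver coming up, assuming curl is available (e.g. from a toolbox container):

    until curl -ks https://172.233.222.125:6443/healthz; do sleep 1; done; echo
    # prints "ok" once the apiserver answers
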
May 17 00:21:17.157934 kubelet[2146]: I0517 00:21:17.157898 2146 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:21:17.157974 kubelet[2146]: E0517 00:21:17.157932 2146 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:17.164343 kubelet[2146]: W0517 00:21:17.164299 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.222.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.222.125:6443: connect: connection refused May 17 00:21:17.164343 kubelet[2146]: E0517 00:21:17.164328 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.233.222.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.222.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:21:17.176119 kubelet[2146]: I0517 00:21:17.176106 2146 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:17.176119 kubelet[2146]: I0517 00:21:17.176116 2146 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:17.176205 kubelet[2146]: I0517 00:21:17.176128 2146 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:17.177614 kubelet[2146]: I0517 00:21:17.177602 2146 policy_none.go:49] "None policy: Start" May 17 00:21:17.177649 kubelet[2146]: I0517 00:21:17.177616 2146 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:17.177649 kubelet[2146]: I0517 00:21:17.177625 2146 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:17.184211 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:21:17.191931 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:21:17.195186 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:21:17.203919 kubelet[2146]: I0517 00:21:17.203908 2146 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:21:17.204234 kubelet[2146]: I0517 00:21:17.204035 2146 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:17.204234 kubelet[2146]: I0517 00:21:17.204046 2146 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:17.204234 kubelet[2146]: I0517 00:21:17.204227 2146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:17.205033 kubelet[2146]: E0517 00:21:17.205021 2146 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:21:17.205108 kubelet[2146]: E0517 00:21:17.205099 2146 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-222-125\" not found" May 17 00:21:17.265149 systemd[1]: Created slice kubepods-burstable-pod35cf75c0f99c11032b6c10375197235c.slice - libcontainer container kubepods-burstable-pod35cf75c0f99c11032b6c10375197235c.slice. 
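
The kubepods*.slice units created here are the systemd side of the kubelet's cgroup layout (CgroupDriver=systemd, cgroup v2 per the nodeConfig above): one parent slice plus per-QoS children for burstable and besteffort pods. They can be inspected like any other unit or cgroup path, e.g.:

    systemctl status kubepods.slice --no-pager
    systemd-cgls /kubepods.slice
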
May 17 00:21:17.275737 kubelet[2146]: E0517 00:21:17.275716 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:17.277449 systemd[1]: Created slice kubepods-burstable-pod1067a4309e3dc889fe72c980e4e16413.slice - libcontainer container kubepods-burstable-pod1067a4309e3dc889fe72c980e4e16413.slice. May 17 00:21:17.281081 kubelet[2146]: E0517 00:21:17.280935 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:17.282773 systemd[1]: Created slice kubepods-burstable-podae4ba6ee6c1c9520b74f5621fa5c1350.slice - libcontainer container kubepods-burstable-podae4ba6ee6c1c9520b74f5621fa5c1350.slice. May 17 00:21:17.284045 kubelet[2146]: E0517 00:21:17.284033 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:17.305944 kubelet[2146]: I0517 00:21:17.305918 2146 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-125" May 17 00:21:17.306153 kubelet[2146]: E0517 00:21:17.306134 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.125:6443/api/v1/nodes\": dial tcp 172.233.222.125:6443: connect: connection refused" node="172-233-222-125" May 17 00:21:17.348517 kubelet[2146]: E0517 00:21:17.348465 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-125?timeout=10s\": dial tcp 172.233.222.125:6443: connect: connection refused" interval="400ms" May 17 00:21:17.447962 kubelet[2146]: I0517 00:21:17.447900 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae4ba6ee6c1c9520b74f5621fa5c1350-k8s-certs\") pod \"kube-apiserver-172-233-222-125\" (UID: \"ae4ba6ee6c1c9520b74f5621fa5c1350\") " pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:17.448004 kubelet[2146]: I0517 00:21:17.447956 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-k8s-certs\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:17.448046 kubelet[2146]: I0517 00:21:17.448007 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-kubeconfig\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:17.448046 kubelet[2146]: I0517 00:21:17.448030 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:17.448093 kubelet[2146]: I0517 00:21:17.448050 
2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1067a4309e3dc889fe72c980e4e16413-kubeconfig\") pod \"kube-scheduler-172-233-222-125\" (UID: \"1067a4309e3dc889fe72c980e4e16413\") " pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:17.448093 kubelet[2146]: I0517 00:21:17.448070 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae4ba6ee6c1c9520b74f5621fa5c1350-ca-certs\") pod \"kube-apiserver-172-233-222-125\" (UID: \"ae4ba6ee6c1c9520b74f5621fa5c1350\") " pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:17.448093 kubelet[2146]: I0517 00:21:17.448087 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae4ba6ee6c1c9520b74f5621fa5c1350-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-125\" (UID: \"ae4ba6ee6c1c9520b74f5621fa5c1350\") " pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:17.448148 kubelet[2146]: I0517 00:21:17.448105 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-ca-certs\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:17.448148 kubelet[2146]: I0517 00:21:17.448122 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:17.510664 kubelet[2146]: I0517 00:21:17.510647 2146 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-125" May 17 00:21:17.510837 kubelet[2146]: E0517 00:21:17.510822 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.125:6443/api/v1/nodes\": dial tcp 172.233.222.125:6443: connect: connection refused" node="172-233-222-125" May 17 00:21:17.576772 kubelet[2146]: E0517 00:21:17.576715 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:17.577843 containerd[1455]: time="2025-05-17T00:21:17.577812347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-125,Uid:35cf75c0f99c11032b6c10375197235c,Namespace:kube-system,Attempt:0,}" May 17 00:21:17.582126 kubelet[2146]: E0517 00:21:17.582060 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:17.583009 containerd[1455]: time="2025-05-17T00:21:17.582934865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-125,Uid:1067a4309e3dc889fe72c980e4e16413,Namespace:kube-system,Attempt:0,}" May 17 00:21:17.585278 kubelet[2146]: E0517 00:21:17.585247 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:17.585753 containerd[1455]: time="2025-05-17T00:21:17.585576363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-125,Uid:ae4ba6ee6c1c9520b74f5621fa5c1350,Namespace:kube-system,Attempt:0,}" May 17 00:21:17.749441 kubelet[2146]: E0517 00:21:17.749415 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-125?timeout=10s\": dial tcp 172.233.222.125:6443: connect: connection refused" interval="800ms" May 17 00:21:17.912907 kubelet[2146]: I0517 00:21:17.912876 2146 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-125" May 17 00:21:17.913203 kubelet[2146]: E0517 00:21:17.913185 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.125:6443/api/v1/nodes\": dial tcp 172.233.222.125:6443: connect: connection refused" node="172-233-222-125" May 17 00:21:18.141392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894842689.mount: Deactivated successfully. May 17 00:21:18.145482 containerd[1455]: time="2025-05-17T00:21:18.145425383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:18.146019 containerd[1455]: time="2025-05-17T00:21:18.145971823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:21:18.146491 containerd[1455]: time="2025-05-17T00:21:18.146468933Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:18.147466 containerd[1455]: time="2025-05-17T00:21:18.147438562Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:18.148013 containerd[1455]: time="2025-05-17T00:21:18.147993972Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:18.148315 containerd[1455]: time="2025-05-17T00:21:18.148235232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:21:18.148732 containerd[1455]: time="2025-05-17T00:21:18.148691022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:21:18.150015 containerd[1455]: time="2025-05-17T00:21:18.149797791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:21:18.152201 containerd[1455]: time="2025-05-17T00:21:18.151095891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.473598ms" May 17 00:21:18.152201 containerd[1455]: time="2025-05-17T00:21:18.151834280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.950273ms" May 17 00:21:18.153390 containerd[1455]: time="2025-05-17T00:21:18.153351139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.315464ms" May 17 00:21:18.237672 containerd[1455]: time="2025-05-17T00:21:18.237598087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:18.238217 containerd[1455]: time="2025-05-17T00:21:18.238049347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:18.238639 containerd[1455]: time="2025-05-17T00:21:18.238514307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:18.238639 containerd[1455]: time="2025-05-17T00:21:18.238584787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:18.251884 containerd[1455]: time="2025-05-17T00:21:18.251822600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:18.252010 containerd[1455]: time="2025-05-17T00:21:18.251943440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:18.252073 containerd[1455]: time="2025-05-17T00:21:18.251985880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:18.252098 containerd[1455]: time="2025-05-17T00:21:18.252014830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:18.252187 containerd[1455]: time="2025-05-17T00:21:18.252111780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:18.252267 containerd[1455]: time="2025-05-17T00:21:18.252230950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:18.252357 containerd[1455]: time="2025-05-17T00:21:18.252299060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:18.252925 containerd[1455]: time="2025-05-17T00:21:18.252888990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:18.270316 systemd[1]: Started cri-containerd-d818b2206385b819b70afeb6d7dfc3e2e15d130ba122b033252932602104c5af.scope - libcontainer container d818b2206385b819b70afeb6d7dfc3e2e15d130ba122b033252932602104c5af. May 17 00:21:18.275989 systemd[1]: Started cri-containerd-8c6f4badf8dbed10eec57ee0f3f4ed4f886fa73f10ddc326f5ab7111cb0e27ab.scope - libcontainer container 8c6f4badf8dbed10eec57ee0f3f4ed4f886fa73f10ddc326f5ab7111cb0e27ab. May 17 00:21:18.279974 systemd[1]: Started cri-containerd-dc08d88b740666ab2b5d8cee35069659e28693490cbbcfbeedc362f007ea6213.scope - libcontainer container dc08d88b740666ab2b5d8cee35069659e28693490cbbcfbeedc362f007ea6213. May 17 00:21:18.316298 containerd[1455]: time="2025-05-17T00:21:18.316265138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-125,Uid:35cf75c0f99c11032b6c10375197235c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c6f4badf8dbed10eec57ee0f3f4ed4f886fa73f10ddc326f5ab7111cb0e27ab\"" May 17 00:21:18.318425 kubelet[2146]: E0517 00:21:18.318403 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:18.321077 containerd[1455]: time="2025-05-17T00:21:18.321013356Z" level=info msg="CreateContainer within sandbox \"8c6f4badf8dbed10eec57ee0f3f4ed4f886fa73f10ddc326f5ab7111cb0e27ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:21:18.336638 containerd[1455]: time="2025-05-17T00:21:18.336579888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-125,Uid:ae4ba6ee6c1c9520b74f5621fa5c1350,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc08d88b740666ab2b5d8cee35069659e28693490cbbcfbeedc362f007ea6213\"" May 17 00:21:18.338698 kubelet[2146]: E0517 00:21:18.338625 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:18.342574 containerd[1455]: time="2025-05-17T00:21:18.342539075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-125,Uid:1067a4309e3dc889fe72c980e4e16413,Namespace:kube-system,Attempt:0,} returns sandbox id \"d818b2206385b819b70afeb6d7dfc3e2e15d130ba122b033252932602104c5af\"" May 17 00:21:18.343746 containerd[1455]: time="2025-05-17T00:21:18.343681554Z" level=info msg="CreateContainer within sandbox \"dc08d88b740666ab2b5d8cee35069659e28693490cbbcfbeedc362f007ea6213\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:21:18.344116 kubelet[2146]: E0517 00:21:18.343871 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:18.345310 containerd[1455]: time="2025-05-17T00:21:18.345274853Z" level=info msg="CreateContainer within sandbox \"8c6f4badf8dbed10eec57ee0f3f4ed4f886fa73f10ddc326f5ab7111cb0e27ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"00d4736502fcedc63922541ea42b54d0e7e6aebf36bf3ed6ceb06c3c194b00a6\"" May 17 00:21:18.345554 containerd[1455]: time="2025-05-17T00:21:18.345524853Z" level=info msg="CreateContainer within sandbox \"d818b2206385b819b70afeb6d7dfc3e2e15d130ba122b033252932602104c5af\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:21:18.346253 containerd[1455]: time="2025-05-17T00:21:18.345888503Z" level=info msg="StartContainer for \"00d4736502fcedc63922541ea42b54d0e7e6aebf36bf3ed6ceb06c3c194b00a6\"" May 17 00:21:18.358053 kubelet[2146]: W0517 00:21:18.358001 2146 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.222.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.222.125:6443: connect: connection refused May 17 00:21:18.358137 kubelet[2146]: E0517 00:21:18.358075 2146 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.222.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.222.125:6443: connect: connection refused" logger="UnhandledError" May 17 00:21:18.358319 containerd[1455]: time="2025-05-17T00:21:18.358276747Z" level=info msg="CreateContainer within sandbox \"dc08d88b740666ab2b5d8cee35069659e28693490cbbcfbeedc362f007ea6213\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e532e441edba48b091e3d9d5e64d68b8863d71b44e05e09e7580ffa69d66313\"" May 17 00:21:18.358634 containerd[1455]: time="2025-05-17T00:21:18.358611367Z" level=info msg="StartContainer for \"6e532e441edba48b091e3d9d5e64d68b8863d71b44e05e09e7580ffa69d66313\"" May 17 00:21:18.362339 containerd[1455]: time="2025-05-17T00:21:18.362260665Z" level=info msg="CreateContainer within sandbox \"d818b2206385b819b70afeb6d7dfc3e2e15d130ba122b033252932602104c5af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e6de9ce10ea98a39712df525dea2721aea1c19ef6fc7b09d7dcc25dc116b90e\"" May 17 00:21:18.362561 containerd[1455]: time="2025-05-17T00:21:18.362535225Z" level=info msg="StartContainer for \"7e6de9ce10ea98a39712df525dea2721aea1c19ef6fc7b09d7dcc25dc116b90e\"" May 17 00:21:18.376281 systemd[1]: Started cri-containerd-00d4736502fcedc63922541ea42b54d0e7e6aebf36bf3ed6ceb06c3c194b00a6.scope - libcontainer container 00d4736502fcedc63922541ea42b54d0e7e6aebf36bf3ed6ceb06c3c194b00a6. May 17 00:21:18.405273 systemd[1]: Started cri-containerd-6e532e441edba48b091e3d9d5e64d68b8863d71b44e05e09e7580ffa69d66313.scope - libcontainer container 6e532e441edba48b091e3d9d5e64d68b8863d71b44e05e09e7580ffa69d66313. May 17 00:21:18.410076 systemd[1]: Started cri-containerd-7e6de9ce10ea98a39712df525dea2721aea1c19ef6fc7b09d7dcc25dc116b90e.scope - libcontainer container 7e6de9ce10ea98a39712df525dea2721aea1c19ef6fc7b09d7dcc25dc116b90e. 
May 17 00:21:18.449338 containerd[1455]: time="2025-05-17T00:21:18.447322152Z" level=info msg="StartContainer for \"00d4736502fcedc63922541ea42b54d0e7e6aebf36bf3ed6ceb06c3c194b00a6\" returns successfully" May 17 00:21:18.450441 containerd[1455]: time="2025-05-17T00:21:18.450423381Z" level=info msg="StartContainer for \"7e6de9ce10ea98a39712df525dea2721aea1c19ef6fc7b09d7dcc25dc116b90e\" returns successfully" May 17 00:21:18.481498 containerd[1455]: time="2025-05-17T00:21:18.481458455Z" level=info msg="StartContainer for \"6e532e441edba48b091e3d9d5e64d68b8863d71b44e05e09e7580ffa69d66313\" returns successfully" May 17 00:21:18.715504 kubelet[2146]: I0517 00:21:18.715472 2146 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-125" May 17 00:21:19.180076 kubelet[2146]: E0517 00:21:19.180040 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:19.181154 kubelet[2146]: E0517 00:21:19.181123 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:19.181600 kubelet[2146]: E0517 00:21:19.181579 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:19.181680 kubelet[2146]: E0517 00:21:19.181660 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:19.183468 kubelet[2146]: E0517 00:21:19.183440 2146 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:19.183544 kubelet[2146]: E0517 00:21:19.183524 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:19.620015 kubelet[2146]: E0517 00:21:19.619845 2146 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-222-125\" not found" node="172-233-222-125" May 17 00:21:19.705033 kubelet[2146]: I0517 00:21:19.704999 2146 kubelet_node_status.go:78] "Successfully registered node" node="172-233-222-125" May 17 00:21:19.705089 kubelet[2146]: E0517 00:21:19.705056 2146 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-233-222-125\": node \"172-233-222-125\" not found" May 17 00:21:19.748150 kubelet[2146]: I0517 00:21:19.747918 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:19.752436 kubelet[2146]: E0517 00:21:19.752418 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-222-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:19.752503 kubelet[2146]: I0517 00:21:19.752493 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:19.753523 kubelet[2146]: E0517 00:21:19.753426 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-172-233-222-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:19.753523 kubelet[2146]: I0517 00:21:19.753440 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:19.755891 kubelet[2146]: E0517 00:21:19.755878 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:20.137590 kubelet[2146]: I0517 00:21:20.137558 2146 apiserver.go:52] "Watching apiserver" May 17 00:21:20.146916 kubelet[2146]: I0517 00:21:20.146888 2146 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:20.184103 kubelet[2146]: I0517 00:21:20.184091 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:20.184382 kubelet[2146]: I0517 00:21:20.184327 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:20.185477 kubelet[2146]: E0517 00:21:20.185451 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:20.185573 kubelet[2146]: E0517 00:21:20.185562 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:20.185739 kubelet[2146]: E0517 00:21:20.185724 2146 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:20.185822 kubelet[2146]: E0517 00:21:20.185810 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:21.184779 kubelet[2146]: I0517 00:21:21.184753 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:21.185132 kubelet[2146]: I0517 00:21:21.185028 2146 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:21.188698 kubelet[2146]: E0517 00:21:21.188674 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:21.193111 kubelet[2146]: E0517 00:21:21.193086 2146 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:21.539907 systemd[1]: Reloading requested from client PID 2418 ('systemctl') (unit session-7.scope)... May 17 00:21:21.539924 systemd[1]: Reloading... May 17 00:21:21.643221 zram_generator::config[2467]: No configuration found. 
May 17 00:21:21.738994 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:21:21.815688 systemd[1]: Reloading finished in 275 ms. May 17 00:21:21.856113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:21.868701 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:21:21.869009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:21.875360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:21:22.028771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:21:22.033089 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:21:22.088273 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:22.088273 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:21:22.088273 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:21:22.088273 kubelet[2509]: I0517 00:21:22.087589 2509 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:21:22.100572 kubelet[2509]: I0517 00:21:22.100535 2509 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:21:22.100572 kubelet[2509]: I0517 00:21:22.100561 2509 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:21:22.101944 kubelet[2509]: I0517 00:21:22.100945 2509 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:21:22.103262 kubelet[2509]: I0517 00:21:22.103244 2509 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:21:22.106468 kubelet[2509]: I0517 00:21:22.106043 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:21:22.110736 kubelet[2509]: E0517 00:21:22.110712 2509 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:21:22.110736 kubelet[2509]: I0517 00:21:22.110733 2509 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:21:22.115173 kubelet[2509]: I0517 00:21:22.114411 2509 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:21:22.115173 kubelet[2509]: I0517 00:21:22.114605 2509 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:21:22.115173 kubelet[2509]: I0517 00:21:22.114631 2509 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:21:22.115173 kubelet[2509]: I0517 00:21:22.115048 2509 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:21:22.115343 kubelet[2509]: I0517 00:21:22.115057 2509 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:21:22.115343 kubelet[2509]: I0517 00:21:22.115095 2509 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:22.115343 kubelet[2509]: I0517 00:21:22.115238 2509 kubelet.go:446] "Attempting to sync node with API server" May 17 00:21:22.115343 kubelet[2509]: I0517 00:21:22.115255 2509 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:21:22.115343 kubelet[2509]: I0517 00:21:22.115269 2509 kubelet.go:352] "Adding apiserver pod source" May 17 00:21:22.115343 kubelet[2509]: I0517 00:21:22.115278 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:21:22.117549 kubelet[2509]: I0517 00:21:22.117532 2509 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:21:22.117988 kubelet[2509]: I0517 00:21:22.117972 2509 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:21:22.120318 kubelet[2509]: I0517 00:21:22.119908 2509 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:21:22.120318 kubelet[2509]: I0517 00:21:22.119941 2509 server.go:1287] "Started kubelet" May 17 00:21:22.121435 kubelet[2509]: I0517 00:21:22.121273 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:21:22.135737 kubelet[2509]: E0517 00:21:22.135650 2509 kubelet.go:1555] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:21:22.136037 kubelet[2509]: I0517 00:21:22.136005 2509 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:21:22.136711 kubelet[2509]: I0517 00:21:22.136690 2509 server.go:479] "Adding debug handlers to kubelet server" May 17 00:21:22.137423 kubelet[2509]: I0517 00:21:22.137276 2509 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:21:22.137563 kubelet[2509]: I0517 00:21:22.137233 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:21:22.138226 kubelet[2509]: I0517 00:21:22.138180 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:21:22.140094 kubelet[2509]: I0517 00:21:22.139675 2509 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:21:22.140094 kubelet[2509]: I0517 00:21:22.139818 2509 factory.go:221] Registration of the systemd container factory successfully May 17 00:21:22.140094 kubelet[2509]: I0517 00:21:22.139886 2509 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:21:22.140413 kubelet[2509]: I0517 00:21:22.140403 2509 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:21:22.140840 kubelet[2509]: I0517 00:21:22.140820 2509 reconciler.go:26] "Reconciler: start to sync state" May 17 00:21:22.141824 kubelet[2509]: I0517 00:21:22.141521 2509 factory.go:221] Registration of the containerd container factory successfully May 17 00:21:22.147372 kubelet[2509]: I0517 00:21:22.147354 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:21:22.149513 kubelet[2509]: I0517 00:21:22.149444 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:21:22.149513 kubelet[2509]: I0517 00:21:22.149467 2509 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:21:22.149513 kubelet[2509]: I0517 00:21:22.149482 2509 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
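The three flag-deprecation warnings at the kubelet restart above all point at the same remedy: --container-runtime-endpoint and --volume-plugin-dir belong in the KubeletConfiguration file passed via --config, while --pod-infra-container-image is going away entirely because the sandbox image now comes from the CRI. A hedged sketch of the migration using only the stdlib (a JSON document is valid YAML); the field names follow kubelet.config.k8s.io/v1beta1, and the endpoint value is illustrative, not read from this node:

    # Hedged sketch: minimal KubeletConfiguration covering the deprecated
    # flags warned about above. Values are examples, not this node's.
    import json

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # replaces --volume-plugin-dir
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    print(json.dumps(config, indent=2))
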
May 17 00:21:22.149513 kubelet[2509]: I0517 00:21:22.149488 2509 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:21:22.149620 kubelet[2509]: E0517 00:21:22.149523 2509 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:21:22.178501 kubelet[2509]: I0517 00:21:22.178462 2509 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:21:22.178501 kubelet[2509]: I0517 00:21:22.178479 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:21:22.178501 kubelet[2509]: I0517 00:21:22.178496 2509 state_mem.go:36] "Initialized new in-memory state store" May 17 00:21:22.178685 kubelet[2509]: I0517 00:21:22.178622 2509 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:21:22.178685 kubelet[2509]: I0517 00:21:22.178637 2509 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:21:22.178685 kubelet[2509]: I0517 00:21:22.178653 2509 policy_none.go:49] "None policy: Start" May 17 00:21:22.178685 kubelet[2509]: I0517 00:21:22.178662 2509 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:21:22.178685 kubelet[2509]: I0517 00:21:22.178670 2509 state_mem.go:35] "Initializing new in-memory state store" May 17 00:21:22.178767 kubelet[2509]: I0517 00:21:22.178743 2509 state_mem.go:75] "Updated machine memory state" May 17 00:21:22.182939 kubelet[2509]: I0517 00:21:22.182264 2509 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:21:22.182939 kubelet[2509]: I0517 00:21:22.182402 2509 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:21:22.182939 kubelet[2509]: I0517 00:21:22.182412 2509 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:21:22.182939 kubelet[2509]: I0517 00:21:22.182828 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:21:22.185021 kubelet[2509]: E0517 00:21:22.184624 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:21:22.250550 kubelet[2509]: I0517 00:21:22.250483 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:22.250757 kubelet[2509]: I0517 00:21:22.250746 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:22.250930 kubelet[2509]: I0517 00:21:22.250919 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:22.256488 kubelet[2509]: E0517 00:21:22.256466 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-125\" already exists" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:22.256744 kubelet[2509]: E0517 00:21:22.256677 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-125\" already exists" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:22.288973 kubelet[2509]: I0517 00:21:22.288959 2509 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-125" May 17 00:21:22.293498 kubelet[2509]: I0517 00:21:22.293481 2509 kubelet_node_status.go:124] "Node was previously registered" node="172-233-222-125" May 17 00:21:22.293565 kubelet[2509]: I0517 00:21:22.293526 2509 kubelet_node_status.go:78] "Successfully registered node" node="172-233-222-125" May 17 00:21:22.344144 kubelet[2509]: I0517 00:21:22.343544 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-ca-certs\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:22.344144 kubelet[2509]: I0517 00:21:22.343575 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:22.344144 kubelet[2509]: I0517 00:21:22.343620 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-kubeconfig\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:22.344144 kubelet[2509]: I0517 00:21:22.343645 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:22.344144 kubelet[2509]: I0517 00:21:22.343663 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1067a4309e3dc889fe72c980e4e16413-kubeconfig\") pod \"kube-scheduler-172-233-222-125\" (UID: \"1067a4309e3dc889fe72c980e4e16413\") " pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:22.344272 
kubelet[2509]: I0517 00:21:22.343679 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae4ba6ee6c1c9520b74f5621fa5c1350-ca-certs\") pod \"kube-apiserver-172-233-222-125\" (UID: \"ae4ba6ee6c1c9520b74f5621fa5c1350\") " pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:22.344272 kubelet[2509]: I0517 00:21:22.343726 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae4ba6ee6c1c9520b74f5621fa5c1350-k8s-certs\") pod \"kube-apiserver-172-233-222-125\" (UID: \"ae4ba6ee6c1c9520b74f5621fa5c1350\") " pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:22.344272 kubelet[2509]: I0517 00:21:22.343749 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae4ba6ee6c1c9520b74f5621fa5c1350-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-125\" (UID: \"ae4ba6ee6c1c9520b74f5621fa5c1350\") " pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:22.344272 kubelet[2509]: I0517 00:21:22.343766 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35cf75c0f99c11032b6c10375197235c-k8s-certs\") pod \"kube-controller-manager-172-233-222-125\" (UID: \"35cf75c0f99c11032b6c10375197235c\") " pod="kube-system/kube-controller-manager-172-233-222-125" May 17 00:21:22.557455 kubelet[2509]: E0517 00:21:22.557307 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:22.557744 kubelet[2509]: E0517 00:21:22.557722 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:22.557779 kubelet[2509]: E0517 00:21:22.557727 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:23.116515 kubelet[2509]: I0517 00:21:23.116352 2509 apiserver.go:52] "Watching apiserver" May 17 00:21:23.141192 kubelet[2509]: I0517 00:21:23.141153 2509 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:21:23.168514 kubelet[2509]: E0517 00:21:23.168184 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:23.168514 kubelet[2509]: I0517 00:21:23.168204 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:23.168514 kubelet[2509]: I0517 00:21:23.168505 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:23.177302 kubelet[2509]: E0517 00:21:23.177236 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-125\" already exists" pod="kube-system/kube-scheduler-172-233-222-125" May 17 00:21:23.177385 kubelet[2509]: E0517 00:21:23.177281 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-172-233-222-125\" already exists" pod="kube-system/kube-apiserver-172-233-222-125" May 17 00:21:23.177471 kubelet[2509]: E0517 00:21:23.177453 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:23.177769 kubelet[2509]: E0517 00:21:23.177753 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:23.189433 kubelet[2509]: I0517 00:21:23.189332 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-222-125" podStartSLOduration=1.189324091 podStartE2EDuration="1.189324091s" podCreationTimestamp="2025-05-17 00:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:23.188433981 +0000 UTC m=+1.152076805" watchObservedRunningTime="2025-05-17 00:21:23.189324091 +0000 UTC m=+1.152966925" May 17 00:21:23.201986 kubelet[2509]: I0517 00:21:23.201701 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-222-125" podStartSLOduration=2.201693475 podStartE2EDuration="2.201693475s" podCreationTimestamp="2025-05-17 00:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:23.201581625 +0000 UTC m=+1.165224449" watchObservedRunningTime="2025-05-17 00:21:23.201693475 +0000 UTC m=+1.165336319" May 17 00:21:23.201986 kubelet[2509]: I0517 00:21:23.201791 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-222-125" podStartSLOduration=2.201787485 podStartE2EDuration="2.201787485s" podCreationTimestamp="2025-05-17 00:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:23.194446778 +0000 UTC m=+1.158089612" watchObservedRunningTime="2025-05-17 00:21:23.201787485 +0000 UTC m=+1.165430319" May 17 00:21:23.717448 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 17 00:21:24.169781 kubelet[2509]: E0517 00:21:24.169676 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:24.170516 kubelet[2509]: E0517 00:21:24.170132 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:24.338049 kubelet[2509]: E0517 00:21:24.338023 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:25.171994 kubelet[2509]: E0517 00:21:25.171595 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:27.837225 kubelet[2509]: E0517 00:21:27.837065 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:28.553521 kubelet[2509]: I0517 00:21:28.553485 2509 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:21:28.553753 containerd[1455]: time="2025-05-17T00:21:28.553728018Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:21:28.554212 kubelet[2509]: I0517 00:21:28.553852 2509 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:21:29.518452 kubelet[2509]: I0517 00:21:29.518362 2509 status_manager.go:890] "Failed to get status for pod" podUID="eba451b0-171f-4b92-a580-e97c2829e861" pod="kube-system/kube-proxy-knzh9" err="pods \"kube-proxy-knzh9\" is forbidden: User \"system:node:172-233-222-125\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-222-125' and this object" May 17 00:21:29.525292 systemd[1]: Created slice kubepods-besteffort-podeba451b0_171f_4b92_a580_e97c2829e861.slice - libcontainer container kubepods-besteffort-podeba451b0_171f_4b92_a580_e97c2829e861.slice. 
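The PodCIDR handoff above is what eventually unblocks pod networking: the kubelet pushes the node's allocated range through the CRI, and containerd keeps waiting for a CNI config to be dropped in (the Calico install later in this log provides it). The one-off "no relationship found" status error for kube-proxy is the node authorizer lagging the just-created pod binding; it clears on retry. For scale, the allocated /24 bounds the node's pod address space:

    # Size of the node's pod address range reported above.
    import ipaddress
    net = ipaddress.ip_network("192.168.0.0/24")  # newPodCIDR from the log
    print(net.num_addresses)  # 256 addresses for this node's pods
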
May 17 00:21:29.591264 kubelet[2509]: I0517 00:21:29.591209 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eba451b0-171f-4b92-a580-e97c2829e861-kube-proxy\") pod \"kube-proxy-knzh9\" (UID: \"eba451b0-171f-4b92-a580-e97c2829e861\") " pod="kube-system/kube-proxy-knzh9" May 17 00:21:29.591322 kubelet[2509]: I0517 00:21:29.591268 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eba451b0-171f-4b92-a580-e97c2829e861-lib-modules\") pod \"kube-proxy-knzh9\" (UID: \"eba451b0-171f-4b92-a580-e97c2829e861\") " pod="kube-system/kube-proxy-knzh9" May 17 00:21:29.591322 kubelet[2509]: I0517 00:21:29.591289 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eba451b0-171f-4b92-a580-e97c2829e861-xtables-lock\") pod \"kube-proxy-knzh9\" (UID: \"eba451b0-171f-4b92-a580-e97c2829e861\") " pod="kube-system/kube-proxy-knzh9" May 17 00:21:29.591322 kubelet[2509]: I0517 00:21:29.591310 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56q8\" (UniqueName: \"kubernetes.io/projected/eba451b0-171f-4b92-a580-e97c2829e861-kube-api-access-t56q8\") pod \"kube-proxy-knzh9\" (UID: \"eba451b0-171f-4b92-a580-e97c2829e861\") " pod="kube-system/kube-proxy-knzh9" May 17 00:21:29.675554 kubelet[2509]: W0517 00:21:29.675507 2509 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:172-233-222-125" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-233-222-125' and this object May 17 00:21:29.675675 kubelet[2509]: E0517 00:21:29.675566 2509 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:172-233-222-125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-233-222-125' and this object" logger="UnhandledError" May 17 00:21:29.675675 kubelet[2509]: W0517 00:21:29.675611 2509 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-233-222-125" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-233-222-125' and this object May 17 00:21:29.675675 kubelet[2509]: E0517 00:21:29.675622 2509 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-233-222-125\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-233-222-125' and this object" logger="UnhandledError" May 17 00:21:29.678120 systemd[1]: Created slice kubepods-besteffort-pod83b644a0_3ae2_487c_bb51_23b8fdf8f476.slice - libcontainer container kubepods-besteffort-pod83b644a0_3ae2_487c_bb51_23b8fdf8f476.slice. 
May 17 00:21:29.693674 kubelet[2509]: I0517 00:21:29.693457 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9jk\" (UniqueName: \"kubernetes.io/projected/83b644a0-3ae2-487c-bb51-23b8fdf8f476-kube-api-access-6s9jk\") pod \"tigera-operator-844669ff44-gxztm\" (UID: \"83b644a0-3ae2-487c-bb51-23b8fdf8f476\") " pod="tigera-operator/tigera-operator-844669ff44-gxztm" May 17 00:21:29.693674 kubelet[2509]: I0517 00:21:29.693557 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83b644a0-3ae2-487c-bb51-23b8fdf8f476-var-lib-calico\") pod \"tigera-operator-844669ff44-gxztm\" (UID: \"83b644a0-3ae2-487c-bb51-23b8fdf8f476\") " pod="tigera-operator/tigera-operator-844669ff44-gxztm" May 17 00:21:29.833493 kubelet[2509]: E0517 00:21:29.832225 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:29.833569 containerd[1455]: time="2025-05-17T00:21:29.833353748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-knzh9,Uid:eba451b0-171f-4b92-a580-e97c2829e861,Namespace:kube-system,Attempt:0,}" May 17 00:21:29.857542 containerd[1455]: time="2025-05-17T00:21:29.857302506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:29.857810 containerd[1455]: time="2025-05-17T00:21:29.857369146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:29.857810 containerd[1455]: time="2025-05-17T00:21:29.857675436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:29.858779 containerd[1455]: time="2025-05-17T00:21:29.858671485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:29.881035 systemd[1]: run-containerd-runc-k8s.io-431eec0e3935cb155470f3bb1230155852f98d0437b0de33186d0ba054117467-runc.G8Np6j.mount: Deactivated successfully. May 17 00:21:29.891312 systemd[1]: Started cri-containerd-431eec0e3935cb155470f3bb1230155852f98d0437b0de33186d0ba054117467.scope - libcontainer container 431eec0e3935cb155470f3bb1230155852f98d0437b0de33186d0ba054117467. 
May 17 00:21:29.919458 containerd[1455]: time="2025-05-17T00:21:29.919318365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-knzh9,Uid:eba451b0-171f-4b92-a580-e97c2829e861,Namespace:kube-system,Attempt:0,} returns sandbox id \"431eec0e3935cb155470f3bb1230155852f98d0437b0de33186d0ba054117467\"" May 17 00:21:29.920547 kubelet[2509]: E0517 00:21:29.920525 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:29.925770 containerd[1455]: time="2025-05-17T00:21:29.925292082Z" level=info msg="CreateContainer within sandbox \"431eec0e3935cb155470f3bb1230155852f98d0437b0de33186d0ba054117467\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:21:29.936588 containerd[1455]: time="2025-05-17T00:21:29.936545276Z" level=info msg="CreateContainer within sandbox \"431eec0e3935cb155470f3bb1230155852f98d0437b0de33186d0ba054117467\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"40682b89cac203af0d87eace18316ce5bf54add4348a8b4682a4caea896e5cf8\"" May 17 00:21:29.938028 containerd[1455]: time="2025-05-17T00:21:29.937030116Z" level=info msg="StartContainer for \"40682b89cac203af0d87eace18316ce5bf54add4348a8b4682a4caea896e5cf8\"" May 17 00:21:29.975335 systemd[1]: Started cri-containerd-40682b89cac203af0d87eace18316ce5bf54add4348a8b4682a4caea896e5cf8.scope - libcontainer container 40682b89cac203af0d87eace18316ce5bf54add4348a8b4682a4caea896e5cf8. May 17 00:21:30.005108 containerd[1455]: time="2025-05-17T00:21:30.005063862Z" level=info msg="StartContainer for \"40682b89cac203af0d87eace18316ce5bf54add4348a8b4682a4caea896e5cf8\" returns successfully" May 17 00:21:30.181129 kubelet[2509]: E0517 00:21:30.180679 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:30.188662 kubelet[2509]: I0517 00:21:30.188297 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-knzh9" podStartSLOduration=1.188274521 podStartE2EDuration="1.188274521s" podCreationTimestamp="2025-05-17 00:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:30.187568931 +0000 UTC m=+8.151211755" watchObservedRunningTime="2025-05-17 00:21:30.188274521 +0000 UTC m=+8.151917365" May 17 00:21:30.800505 kubelet[2509]: E0517 00:21:30.800455 2509 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 17 00:21:30.800505 kubelet[2509]: E0517 00:21:30.800507 2509 projected.go:194] Error preparing data for projected volume kube-api-access-6s9jk for pod tigera-operator/tigera-operator-844669ff44-gxztm: failed to sync configmap cache: timed out waiting for the condition May 17 00:21:30.800964 kubelet[2509]: E0517 00:21:30.800599 2509 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/83b644a0-3ae2-487c-bb51-23b8fdf8f476-kube-api-access-6s9jk podName:83b644a0-3ae2-487c-bb51-23b8fdf8f476 nodeName:}" failed. No retries permitted until 2025-05-17 00:21:31.300570534 +0000 UTC m=+9.264213368 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6s9jk" (UniqueName: "kubernetes.io/projected/83b644a0-3ae2-487c-bb51-23b8fdf8f476-kube-api-access-6s9jk") pod "tigera-operator-844669ff44-gxztm" (UID: "83b644a0-3ae2-487c-bb51-23b8fdf8f476") : failed to sync configmap cache: timed out waiting for the condition May 17 00:21:31.177532 kubelet[2509]: E0517 00:21:31.176773 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:31.183364 kubelet[2509]: E0517 00:21:31.183314 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:31.483045 containerd[1455]: time="2025-05-17T00:21:31.482985063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-gxztm,Uid:83b644a0-3ae2-487c-bb51-23b8fdf8f476,Namespace:tigera-operator,Attempt:0,}" May 17 00:21:31.501692 containerd[1455]: time="2025-05-17T00:21:31.501512724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:31.501692 containerd[1455]: time="2025-05-17T00:21:31.501550054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:31.501692 containerd[1455]: time="2025-05-17T00:21:31.501561324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:31.501692 containerd[1455]: time="2025-05-17T00:21:31.501643374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:31.525274 systemd[1]: Started cri-containerd-54d3af9a1f88754e69c111c850d2f4ebc7eecbcc53ad0396de923c4a6838f301.scope - libcontainer container 54d3af9a1f88754e69c111c850d2f4ebc7eecbcc53ad0396de923c4a6838f301. May 17 00:21:31.556310 containerd[1455]: time="2025-05-17T00:21:31.556273556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-gxztm,Uid:83b644a0-3ae2-487c-bb51-23b8fdf8f476,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"54d3af9a1f88754e69c111c850d2f4ebc7eecbcc53ad0396de923c4a6838f301\"" May 17 00:21:31.558147 containerd[1455]: time="2025-05-17T00:21:31.558023395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:21:32.704331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90752440.mount: Deactivated successfully. May 17 00:21:33.622274 systemd-resolved[1333]: Clock change detected. Flushing caches. May 17 00:21:33.622513 systemd-timesyncd[1352]: Contacted time server [2600:1702:7400:9ac0::5b]:123 (2.flatcar.pool.ntp.org). May 17 00:21:33.622551 systemd-timesyncd[1352]: Initial clock synchronization to Sat 2025-05-17 00:21:33.622233 UTC. 
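The MountVolume.SetUp failure above closes the loop on the earlier forbidden ConfigMap watches: the kube-api-access projected volume needs kube-root-ca.crt from the tigera-operator namespace, which the node authorizer would not serve until the pod binding propagated, so the configmap cache timed out and the mount was queued for retry. A sketch of the exponential backoff implied by "durationBeforeRetry 500ms", with assumed parameters (500 ms initial, doubling, capped around two minutes; not read from kubelet source):

    # Hedged sketch of the assumed retry schedule behind
    # "No retries permitted until ... (durationBeforeRetry 500ms)".
    import itertools

    def backoffs(initial=0.5, factor=2.0, cap=122.0):
        delay = initial
        while True:
            yield min(delay, cap)  # seconds to wait before the next retry
            delay *= factor

    print(list(itertools.islice(backoffs(), 6)))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
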
May 17 00:21:33.790570 containerd[1455]: time="2025-05-17T00:21:33.790192696Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:33.791218 containerd[1455]: time="2025-05-17T00:21:33.791082376Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:21:33.792608 containerd[1455]: time="2025-05-17T00:21:33.791582216Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:33.793206 containerd[1455]: time="2025-05-17T00:21:33.793059125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:33.793804 containerd[1455]: time="2025-05-17T00:21:33.793690344Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.545626667s" May 17 00:21:33.793804 containerd[1455]: time="2025-05-17T00:21:33.793716364Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:21:33.796449 containerd[1455]: time="2025-05-17T00:21:33.796430723Z" level=info msg="CreateContainer within sandbox \"54d3af9a1f88754e69c111c850d2f4ebc7eecbcc53ad0396de923c4a6838f301\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:21:33.806599 containerd[1455]: time="2025-05-17T00:21:33.806580728Z" level=info msg="CreateContainer within sandbox \"54d3af9a1f88754e69c111c850d2f4ebc7eecbcc53ad0396de923c4a6838f301\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"59e20f3c4d7a92df4b2f94f97043196833fd364f40ae3776d0732756a1f1c760\"" May 17 00:21:33.807278 containerd[1455]: time="2025-05-17T00:21:33.807261088Z" level=info msg="StartContainer for \"59e20f3c4d7a92df4b2f94f97043196833fd364f40ae3776d0732756a1f1c760\"" May 17 00:21:33.840283 systemd[1]: Started cri-containerd-59e20f3c4d7a92df4b2f94f97043196833fd364f40ae3776d0732756a1f1c760.scope - libcontainer container 59e20f3c4d7a92df4b2f94f97043196833fd364f40ae3776d0732756a1f1c760. 
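The pull above is also self-consistent: 25,055,451 bytes read in 1.545626667 s works out to roughly 16 MB/s from quay.io.

    # Rough pull rate implied by the figures above.
    print(25055451 / 1.545626667 / 1e6)  # ~16.2 MB/s
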
May 17 00:21:33.859678 containerd[1455]: time="2025-05-17T00:21:33.859651121Z" level=info msg="StartContainer for \"59e20f3c4d7a92df4b2f94f97043196833fd364f40ae3776d0732756a1f1c760\" returns successfully" May 17 00:21:35.035358 kubelet[2509]: E0517 00:21:35.035317 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:35.060590 kubelet[2509]: I0517 00:21:35.059954 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-gxztm" podStartSLOduration=4.512925575 podStartE2EDuration="6.059934111s" podCreationTimestamp="2025-05-17 00:21:29 +0000 UTC" firstStartedPulling="2025-05-17 00:21:31.557221506 +0000 UTC m=+9.520864340" lastFinishedPulling="2025-05-17 00:21:33.794245834 +0000 UTC m=+11.067872876" observedRunningTime="2025-05-17 00:21:33.891072066 +0000 UTC m=+11.164699108" watchObservedRunningTime="2025-05-17 00:21:35.059934111 +0000 UTC m=+12.333561153" May 17 00:21:38.530566 kubelet[2509]: E0517 00:21:38.530531 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:38.888484 kubelet[2509]: E0517 00:21:38.888434 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:38.931763 sudo[1674]: pam_unix(sudo:session): session closed for user root May 17 00:21:38.986372 sshd[1671]: pam_unix(sshd:session): session closed for user core May 17 00:21:38.990814 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. May 17 00:21:38.991747 systemd[1]: sshd@6-172.233.222.125:22-139.178.89.65:49028.service: Deactivated successfully. May 17 00:21:38.994952 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:21:38.995581 systemd[1]: session-7.scope: Consumed 3.122s CPU time, 155.5M memory peak, 0B memory swap peak. May 17 00:21:38.996484 systemd-logind[1437]: Removed session 7. May 17 00:21:39.531780 update_engine[1441]: I20250517 00:21:39.531715 1441 update_attempter.cc:509] Updating boot flags... May 17 00:21:39.609209 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2908) May 17 00:21:39.688097 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2903) May 17 00:21:39.775082 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2903) May 17 00:21:41.901332 systemd[1]: Created slice kubepods-besteffort-pod30ab6d78_e16a_4a41_bb43_1eb5d9902b58.slice - libcontainer container kubepods-besteffort-pod30ab6d78_e16a_4a41_bb43_1eb5d9902b58.slice. 
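The tigera-operator latency line is worth decoding because, unlike the earlier control-plane pods, it has real pull timestamps, and the wall clock jumped at 00:21:33 ("Clock change detected"). The tracker's m=+ offsets are monotonic, so the arithmetic still closes: the SLO duration is the end-to-end duration minus time spent pulling the image.

    # podStartSLOduration = podStartE2EDuration - image pull time,
    # using the monotonic m=+ offsets from the log above.
    from decimal import Decimal
    e2e = Decimal("6.059934111")
    pull = Decimal("11.067872876") - Decimal("9.520864340")  # lastFinishedPulling - firstStartedPulling
    print(e2e - pull)  # 4.512925575 s, exactly as logged
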
May 17 00:21:41.957108 kubelet[2509]: I0517 00:21:41.957071 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ab6d78-e16a-4a41-bb43-1eb5d9902b58-tigera-ca-bundle\") pod \"calico-typha-7547c65dcc-xlzg8\" (UID: \"30ab6d78-e16a-4a41-bb43-1eb5d9902b58\") " pod="calico-system/calico-typha-7547c65dcc-xlzg8" May 17 00:21:41.957108 kubelet[2509]: I0517 00:21:41.957108 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/30ab6d78-e16a-4a41-bb43-1eb5d9902b58-typha-certs\") pod \"calico-typha-7547c65dcc-xlzg8\" (UID: \"30ab6d78-e16a-4a41-bb43-1eb5d9902b58\") " pod="calico-system/calico-typha-7547c65dcc-xlzg8" May 17 00:21:41.957548 kubelet[2509]: I0517 00:21:41.957127 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw88r\" (UniqueName: \"kubernetes.io/projected/30ab6d78-e16a-4a41-bb43-1eb5d9902b58-kube-api-access-cw88r\") pod \"calico-typha-7547c65dcc-xlzg8\" (UID: \"30ab6d78-e16a-4a41-bb43-1eb5d9902b58\") " pod="calico-system/calico-typha-7547c65dcc-xlzg8" May 17 00:21:42.193883 systemd[1]: Created slice kubepods-besteffort-pod28f57e79_5b60_4340_b30c_fdb8fa86de0f.slice - libcontainer container kubepods-besteffort-pod28f57e79_5b60_4340_b30c_fdb8fa86de0f.slice. May 17 00:21:42.210376 kubelet[2509]: E0517 00:21:42.210043 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:42.211003 containerd[1455]: time="2025-05-17T00:21:42.210685225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7547c65dcc-xlzg8,Uid:30ab6d78-e16a-4a41-bb43-1eb5d9902b58,Namespace:calico-system,Attempt:0,}" May 17 00:21:42.238572 containerd[1455]: time="2025-05-17T00:21:42.237723391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:42.238812 containerd[1455]: time="2025-05-17T00:21:42.238269161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:42.238812 containerd[1455]: time="2025-05-17T00:21:42.238278791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:42.238812 containerd[1455]: time="2025-05-17T00:21:42.238363391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:42.260082 kubelet[2509]: I0517 00:21:42.259837 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-cni-bin-dir\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260082 kubelet[2509]: I0517 00:21:42.259871 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-cni-log-dir\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260082 kubelet[2509]: I0517 00:21:42.259886 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/28f57e79-5b60-4340-b30c-fdb8fa86de0f-node-certs\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260082 kubelet[2509]: I0517 00:21:42.259898 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28f57e79-5b60-4340-b30c-fdb8fa86de0f-tigera-ca-bundle\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260082 kubelet[2509]: I0517 00:21:42.259912 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-cni-net-dir\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260314 kubelet[2509]: I0517 00:21:42.259925 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-flexvol-driver-host\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260314 kubelet[2509]: I0517 00:21:42.259940 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-policysync\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260314 kubelet[2509]: I0517 00:21:42.259952 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-var-lib-calico\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260314 kubelet[2509]: I0517 00:21:42.259964 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-lib-modules\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260314 kubelet[2509]: 
I0517 00:21:42.259975 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-var-run-calico\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260405 kubelet[2509]: I0517 00:21:42.259989 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f57e79-5b60-4340-b30c-fdb8fa86de0f-xtables-lock\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.260405 kubelet[2509]: I0517 00:21:42.260009 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wskk\" (UniqueName: \"kubernetes.io/projected/28f57e79-5b60-4340-b30c-fdb8fa86de0f-kube-api-access-7wskk\") pod \"calico-node-fh7nf\" (UID: \"28f57e79-5b60-4340-b30c-fdb8fa86de0f\") " pod="calico-system/calico-node-fh7nf" May 17 00:21:42.265300 systemd[1]: Started cri-containerd-33bbad48b3f643a04f2d27fbe58fb8d4dea2def3bb22fa4ed5b551d8a9ea7c56.scope - libcontainer container 33bbad48b3f643a04f2d27fbe58fb8d4dea2def3bb22fa4ed5b551d8a9ea7c56. May 17 00:21:42.299693 containerd[1455]: time="2025-05-17T00:21:42.299526141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7547c65dcc-xlzg8,Uid:30ab6d78-e16a-4a41-bb43-1eb5d9902b58,Namespace:calico-system,Attempt:0,} returns sandbox id \"33bbad48b3f643a04f2d27fbe58fb8d4dea2def3bb22fa4ed5b551d8a9ea7c56\"" May 17 00:21:42.300401 kubelet[2509]: E0517 00:21:42.300048 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:42.300866 containerd[1455]: time="2025-05-17T00:21:42.300852040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:21:42.364575 kubelet[2509]: E0517 00:21:42.364544 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.364575 kubelet[2509]: W0517 00:21:42.364569 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.364810 kubelet[2509]: E0517 00:21:42.364786 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:42.365420 kubelet[2509]: E0517 00:21:42.365398 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.365820 kubelet[2509]: W0517 00:21:42.365413 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.365820 kubelet[2509]: E0517 00:21:42.365681 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:42.366609 kubelet[2509]: E0517 00:21:42.366570 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.366609 kubelet[2509]: W0517 00:21:42.366596 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.367084 kubelet[2509]: E0517 00:21:42.367060 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.367541 kubelet[2509]: W0517 00:21:42.367514 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.367619 kubelet[2509]: E0517 00:21:42.367231 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:42.367646 kubelet[2509]: E0517 00:21:42.367621 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:42.368281 kubelet[2509]: E0517 00:21:42.368259 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.368281 kubelet[2509]: W0517 00:21:42.368275 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.368548 kubelet[2509]: E0517 00:21:42.368524 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:42.369403 kubelet[2509]: E0517 00:21:42.369241 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.369403 kubelet[2509]: W0517 00:21:42.369252 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.369403 kubelet[2509]: E0517 00:21:42.369329 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:42.369669 kubelet[2509]: E0517 00:21:42.369648 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:42.369669 kubelet[2509]: W0517 00:21:42.369663 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:42.370167 kubelet[2509]: E0517 00:21:42.370147 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:21:42.370437 kubelet[2509]: E0517 00:21:42.370361 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:21:42.370437 kubelet[2509]: W0517 00:21:42.370372 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:21:42.371112 kubelet[2509]: E0517 00:21:42.371093 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:21:42.439389 kubelet[2509]: E0517 00:21:42.439350 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfhj5" podUID="e793e701-f5aa-4190-a1ec-13776ffa5239"
[... the three kubelet messages above (driver-call.go:262, driver-call.go:149, plugins.go:695) repeat as a group with fresh timestamps throughout 00:21:42.370-00:21:42.465; the repetitions are omitted here ...]
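The triplet that floods this log comes from the kubelet's FlexVolume dynamic-probe loop: on each pass it executes the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ with the single argument init and expects a JSON status object on stdout. Here the directory exists but the uds binary does not, so the call produces no output and the JSON decode fails with "unexpected end of JSON input". A minimal Go sketch of that handshake (not the kubelet's actual driver-call.go; the struct follows the public FlexVolume convention and the flow is illustrative):

```go
// probe.go: a minimal sketch of the FlexVolume "init" handshake the
// kubelet performs. Illustrative only; field names follow the public
// FlexVolume driver convention.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver must print for "init",
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// The binary is missing on this node, so the call fails and
		// out stays empty -- the W0517 driver-call.go:149 message.
		fmt.Println("driver call failed:", err)
	}
	var st driverStatus
	// Unmarshalling the empty output yields exactly the logged error:
	// "unexpected end of JSON input" (E0517 driver-call.go:262).
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal init output:", err)
		return
	}
	fmt.Printf("driver initialized: %+v\n", st)
}
```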
May 17 00:21:42.464670 kubelet[2509]: I0517 00:21:42.464614 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e793e701-f5aa-4190-a1ec-13776ffa5239-registration-dir\") pod \"csi-node-driver-mfhj5\" (UID: \"e793e701-f5aa-4190-a1ec-13776ffa5239\") " pod="calico-system/csi-node-driver-mfhj5"
May 17 00:21:42.465213 kubelet[2509]: I0517 00:21:42.465063 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e793e701-f5aa-4190-a1ec-13776ffa5239-kubelet-dir\") pod \"csi-node-driver-mfhj5\" (UID: \"e793e701-f5aa-4190-a1ec-13776ffa5239\") " pod="calico-system/csi-node-driver-mfhj5"
May 17 00:21:42.466274 kubelet[2509]: I0517 00:21:42.466106 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c76k\" (UniqueName: \"kubernetes.io/projected/e793e701-f5aa-4190-a1ec-13776ffa5239-kube-api-access-4c76k\") pod \"csi-node-driver-mfhj5\" (UID: \"e793e701-f5aa-4190-a1ec-13776ffa5239\") " pod="calico-system/csi-node-driver-mfhj5"
May 17 00:21:42.467880 kubelet[2509]: I0517 00:21:42.467754 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e793e701-f5aa-4190-a1ec-13776ffa5239-socket-dir\") pod \"csi-node-driver-mfhj5\" (UID: \"e793e701-f5aa-4190-a1ec-13776ffa5239\") " pod="calico-system/csi-node-driver-mfhj5"
May 17 00:21:42.468164 kubelet[2509]: I0517 00:21:42.468102 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e793e701-f5aa-4190-a1ec-13776ffa5239-varrun\") pod \"csi-node-driver-mfhj5\" (UID: \"e793e701-f5aa-4190-a1ec-13776ffa5239\") " pod="calico-system/csi-node-driver-mfhj5"
[... the FlexVolume init error triplet keeps repeating between these entries through 00:21:42.470; repetitions omitted ...]
May 17 00:21:42.497089 containerd[1455]: time="2025-05-17T00:21:42.497057352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fh7nf,Uid:28f57e79-5b60-4340-b30c-fdb8fa86de0f,Namespace:calico-system,Attempt:0,}"
May 17 00:21:42.517244 containerd[1455]: time="2025-05-17T00:21:42.517118562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:21:42.518198 containerd[1455]: time="2025-05-17T00:21:42.517389672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:21:42.518198 containerd[1455]: time="2025-05-17T00:21:42.517406302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:21:42.518374 containerd[1455]: time="2025-05-17T00:21:42.517710171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:21:42.543344 systemd[1]: Started cri-containerd-19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b.scope - libcontainer container 19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b.
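The "RunPodSandbox for &PodSandboxMetadata{...}" line above is the kubelet driving containerd over the CRI gRPC API. A rough standalone sketch of the equivalent call follows; the socket path is the usual containerd default, the config is deliberately minimal (a real request carries log directory, DNS, and Linux options), and none of this is the kubelet's own code:

```go
// sandbox.go: an illustrative CRI RunPodSandbox call mirroring the
// logged metadata. Assumes the default containerd socket path.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Metadata copied from the log line above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-fh7nf",
				Uid:       "28f57e79-5b60-4340-b30c-fdb8fa86de0f",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd answers with the sandbox id seen a few entries later
	// ("returns sandbox id \"19f19d01ea16...\"").
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```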
May 17 00:21:42.568577 containerd[1455]: time="2025-05-17T00:21:42.568523166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fh7nf,Uid:28f57e79-5b60-4340-b30c-fdb8fa86de0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\""
[... the FlexVolume init error triplet repeats again from 00:21:42.568 through 00:21:42.607; repetitions omitted ...]
May 17 00:21:43.170580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493356749.mount: Deactivated successfully.
May 17 00:21:43.615214 containerd[1455]: time="2025-05-17T00:21:43.614690593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:21:43.615810 containerd[1455]: time="2025-05-17T00:21:43.615771122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669"
May 17 00:21:43.616144 containerd[1455]: time="2025-05-17T00:21:43.616123482Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:21:43.617836 containerd[1455]: time="2025-05-17T00:21:43.617809511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:21:43.618354 containerd[1455]: time="2025-05-17T00:21:43.618331911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.317327461s"
May 17 00:21:43.618416 containerd[1455]: time="2025-05-17T00:21:43.618401301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 17 00:21:43.619088 containerd[1455]: time="2025-05-17T00:21:43.619067301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:21:43.631374 containerd[1455]: time="2025-05-17T00:21:43.631348974Z" level=info msg="CreateContainer within sandbox \"33bbad48b3f643a04f2d27fbe58fb8d4dea2def3bb22fa4ed5b551d8a9ea7c56\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
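The typha pull above reports 35158669 bytes read in 1.317327461s, which works out to roughly 26.7 MB/s from ghcr.io. A quick check of that arithmetic:

```go
// pullrate.go: back-of-the-envelope throughput for the typha image pull
// logged above (35158669 bytes read, 1.317327461s reported pull time).
package main

import "fmt"

func main() {
	const bytes = 35158669.0
	const secs = 1.317327461
	rate := bytes / secs
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1024*1024))
	// prints about 26.7 MB/s (25.5 MiB/s)
}
```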
May 17 00:21:43.638259 containerd[1455]: time="2025-05-17T00:21:43.638232231Z" level=info msg="CreateContainer within sandbox \"33bbad48b3f643a04f2d27fbe58fb8d4dea2def3bb22fa4ed5b551d8a9ea7c56\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ba4ed24bf7e532b1740afa22ed69ee85173639eb10b322044975b841d231f598\""
May 17 00:21:43.639195 containerd[1455]: time="2025-05-17T00:21:43.638641891Z" level=info msg="StartContainer for \"ba4ed24bf7e532b1740afa22ed69ee85173639eb10b322044975b841d231f598\""
May 17 00:21:43.667366 systemd[1]: Started cri-containerd-ba4ed24bf7e532b1740afa22ed69ee85173639eb10b322044975b841d231f598.scope - libcontainer container ba4ed24bf7e532b1740afa22ed69ee85173639eb10b322044975b841d231f598.
May 17 00:21:43.707293 containerd[1455]: time="2025-05-17T00:21:43.707232407Z" level=info msg="StartContainer for \"ba4ed24bf7e532b1740afa22ed69ee85173639eb10b322044975b841d231f598\" returns successfully"
May 17 00:21:43.841304 kubelet[2509]: E0517 00:21:43.840672 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfhj5" podUID="e793e701-f5aa-4190-a1ec-13776ffa5239"
May 17 00:21:43.901864 kubelet[2509]: E0517 00:21:43.901844 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 17 00:21:43.909730 kubelet[2509]: I0517 00:21:43.909678 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7547c65dcc-xlzg8" podStartSLOduration=1.5913658339999999 podStartE2EDuration="2.909669885s" podCreationTimestamp="2025-05-17 00:21:41 +0000 UTC" firstStartedPulling="2025-05-17 00:21:42.30067976 +0000 UTC m=+19.574306802" lastFinishedPulling="2025-05-17 00:21:43.618983801 +0000 UTC m=+20.892610853" observedRunningTime="2025-05-17 00:21:43.908920116 +0000 UTC m=+21.182547158" watchObservedRunningTime="2025-05-17 00:21:43.909669885 +0000 UTC m=+21.183296927"
[... the FlexVolume init error triplet resumes at 00:21:43.970 and repeats through the end of this excerpt (00:21:43.988); repetitions omitted ...]
Error: unexpected end of JSON input" May 17 00:21:43.988986 kubelet[2509]: E0517 00:21:43.988929 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.988986 kubelet[2509]: W0517 00:21:43.988936 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.988986 kubelet[2509]: E0517 00:21:43.988954 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:43.989127 kubelet[2509]: E0517 00:21:43.989114 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.989127 kubelet[2509]: W0517 00:21:43.989124 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.989255 kubelet[2509]: E0517 00:21:43.989139 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:43.989482 kubelet[2509]: E0517 00:21:43.989339 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.989482 kubelet[2509]: W0517 00:21:43.989350 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.989482 kubelet[2509]: E0517 00:21:43.989369 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:43.989704 kubelet[2509]: E0517 00:21:43.989691 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.989704 kubelet[2509]: W0517 00:21:43.989702 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.989793 kubelet[2509]: E0517 00:21:43.989780 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:43.990084 kubelet[2509]: E0517 00:21:43.989899 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.990084 kubelet[2509]: W0517 00:21:43.989910 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.990084 kubelet[2509]: E0517 00:21:43.989917 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:21:43.990084 kubelet[2509]: E0517 00:21:43.990079 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.990084 kubelet[2509]: W0517 00:21:43.990085 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.990231 kubelet[2509]: E0517 00:21:43.990092 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:43.990523 kubelet[2509]: E0517 00:21:43.990509 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:21:43.990563 kubelet[2509]: W0517 00:21:43.990549 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:21:43.990563 kubelet[2509]: E0517 00:21:43.990562 2509 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:21:44.194272 containerd[1455]: time="2025-05-17T00:21:44.194163023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:44.195009 containerd[1455]: time="2025-05-17T00:21:44.194980133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:21:44.195759 containerd[1455]: time="2025-05-17T00:21:44.195743272Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:44.198265 containerd[1455]: time="2025-05-17T00:21:44.197916711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:44.198570 containerd[1455]: time="2025-05-17T00:21:44.198550391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 579.45789ms" May 17 00:21:44.198623 containerd[1455]: time="2025-05-17T00:21:44.198611381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:21:44.200596 containerd[1455]: time="2025-05-17T00:21:44.200570470Z" level=info msg="CreateContainer within sandbox \"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:21:44.216350 containerd[1455]: time="2025-05-17T00:21:44.216313822Z" level=info msg="CreateContainer within sandbox 
\"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd\"" May 17 00:21:44.216753 containerd[1455]: time="2025-05-17T00:21:44.216700752Z" level=info msg="StartContainer for \"dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd\"" May 17 00:21:44.252311 systemd[1]: Started cri-containerd-dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd.scope - libcontainer container dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd. May 17 00:21:44.280773 containerd[1455]: time="2025-05-17T00:21:44.280694170Z" level=info msg="StartContainer for \"dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd\" returns successfully" May 17 00:21:44.297330 systemd[1]: cri-containerd-dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd.scope: Deactivated successfully. May 17 00:21:44.318141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd-rootfs.mount: Deactivated successfully. May 17 00:21:44.343237 containerd[1455]: time="2025-05-17T00:21:44.343119239Z" level=info msg="shim disconnected" id=dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd namespace=k8s.io May 17 00:21:44.343237 containerd[1455]: time="2025-05-17T00:21:44.343165168Z" level=warning msg="cleaning up after shim disconnected" id=dd369830423090cc641e1daaf136fe605ce0afe73c69513f7219fcb4ef79ebcd namespace=k8s.io May 17 00:21:44.343511 containerd[1455]: time="2025-05-17T00:21:44.343173318Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:44.904164 kubelet[2509]: I0517 00:21:44.904132 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:44.905567 kubelet[2509]: E0517 00:21:44.905102 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:44.905920 containerd[1455]: time="2025-05-17T00:21:44.905897257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:21:45.840245 kubelet[2509]: E0517 00:21:45.840202 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mfhj5" podUID="e793e701-f5aa-4190-a1ec-13776ffa5239" May 17 00:21:46.321172 containerd[1455]: time="2025-05-17T00:21:46.321131549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:46.321938 containerd[1455]: time="2025-05-17T00:21:46.321766759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:21:46.322541 containerd[1455]: time="2025-05-17T00:21:46.322325109Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:46.323962 containerd[1455]: time="2025-05-17T00:21:46.323932998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" 
May 17 00:21:46.324760 containerd[1455]: time="2025-05-17T00:21:46.324732887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 1.41812351s" May 17 00:21:46.324760 containerd[1455]: time="2025-05-17T00:21:46.324757767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:21:46.327103 containerd[1455]: time="2025-05-17T00:21:46.327074266Z" level=info msg="CreateContainer within sandbox \"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:21:46.340916 containerd[1455]: time="2025-05-17T00:21:46.340847109Z" level=info msg="CreateContainer within sandbox \"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49\"" May 17 00:21:46.342397 containerd[1455]: time="2025-05-17T00:21:46.341226469Z" level=info msg="StartContainer for \"929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49\"" May 17 00:21:46.372294 systemd[1]: Started cri-containerd-929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49.scope - libcontainer container 929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49. May 17 00:21:46.397628 containerd[1455]: time="2025-05-17T00:21:46.397595331Z" level=info msg="StartContainer for \"929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49\" returns successfully" May 17 00:21:46.909895 systemd[1]: cri-containerd-929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49.scope: Deactivated successfully. May 17 00:21:46.936650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49-rootfs.mount: Deactivated successfully. May 17 00:21:46.971386 containerd[1455]: time="2025-05-17T00:21:46.971328824Z" level=info msg="shim disconnected" id=929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49 namespace=k8s.io May 17 00:21:46.971768 containerd[1455]: time="2025-05-17T00:21:46.971479874Z" level=warning msg="cleaning up after shim disconnected" id=929cdc3e0d9469f9eec1110e94e35679928d9812c72b48d0da2e3e0b8e4c9d49 namespace=k8s.io May 17 00:21:46.971768 containerd[1455]: time="2025-05-17T00:21:46.971491844Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:46.976794 kubelet[2509]: I0517 00:21:46.976218 2509 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:21:46.994026 containerd[1455]: time="2025-05-17T00:21:46.993963523Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:21:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:21:47.010860 systemd[1]: Created slice kubepods-besteffort-pod38c66a96_d94f_40c8_93ac_4e0202591244.slice - libcontainer container kubepods-besteffort-pod38c66a96_d94f_40c8_93ac_4e0202591244.slice.
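The "Created slice" entries here and below are kubelet's systemd cgroup driver materializing one slice per pod under its QoS tier (besteffort, burstable, or directly under kubepods for Guaranteed pods). systemd encodes slice nesting with dashes, so dashes inside the pod UID are escaped to underscores, which is why UID 38c66a96-d94f-40c8-93ac-4e0202591244 appears as pod38c66a96_d94f_40c8_93ac_4e0202591244. A sketch of the naming rule; podSliceName is a hypothetical helper, not kubelet's own function:

// slicename.go - a sketch of the systemd slice naming visible in the log.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the slice name for a pod. qos is "besteffort",
// "burstable", or "" for Guaranteed pods, which sit directly under kubepods.
func podSliceName(qos, podUID string) string {
	base := "kubepods"
	if qos != "" {
		base += "-" + qos
	}
	// systemd reserves "-" for hierarchy, so UID dashes become underscores.
	return fmt.Sprintf("%s-pod%s.slice", base, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Reproduces the first slice created above (the whisker pod).
	fmt.Println(podSliceName("besteffort", "38c66a96-d94f-40c8-93ac-4e0202591244"))
	// -> kubepods-besteffort-pod38c66a96_d94f_40c8_93ac_4e0202591244.slice
}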
May 17 00:21:47.018987 systemd[1]: Created slice kubepods-burstable-pod1ddd81ac_9fd2_4e37_83a9_b3a3b4011761.slice - libcontainer container kubepods-burstable-pod1ddd81ac_9fd2_4e37_83a9_b3a3b4011761.slice. May 17 00:21:47.033240 systemd[1]: Created slice kubepods-burstable-podeab83464_1af3_4982_95f5_5c46d047a7e6.slice - libcontainer container kubepods-burstable-podeab83464_1af3_4982_95f5_5c46d047a7e6.slice. May 17 00:21:47.041213 systemd[1]: Created slice kubepods-besteffort-pod66e07ccd_2dbe_42fe_bd10_e349fb811eb6.slice - libcontainer container kubepods-besteffort-pod66e07ccd_2dbe_42fe_bd10_e349fb811eb6.slice. May 17 00:21:47.050544 systemd[1]: Created slice kubepods-besteffort-pod4f18c687_4cb5_49f2_9647_374af2e4bff4.slice - libcontainer container kubepods-besteffort-pod4f18c687_4cb5_49f2_9647_374af2e4bff4.slice. May 17 00:21:47.057049 systemd[1]: Created slice kubepods-besteffort-podecdb2099_9206_4f56_bd2f_4d5b7338559a.slice - libcontainer container kubepods-besteffort-podecdb2099_9206_4f56_bd2f_4d5b7338559a.slice. May 17 00:21:47.063962 systemd[1]: Created slice kubepods-besteffort-pod708b28d1_b868_4f8b_b7c9_b5fa6b493a92.slice - libcontainer container kubepods-besteffort-pod708b28d1_b868_4f8b_b7c9_b5fa6b493a92.slice. May 17 00:21:47.107902 kubelet[2509]: I0517 00:21:47.107867 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thcbp\" (UniqueName: \"kubernetes.io/projected/708b28d1-b868-4f8b-b7c9-b5fa6b493a92-kube-api-access-thcbp\") pod \"calico-apiserver-77f86bc66d-9ctqn\" (UID: \"708b28d1-b868-4f8b-b7c9-b5fa6b493a92\") " pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" May 17 00:21:47.107967 kubelet[2509]: I0517 00:21:47.107905 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l9k\" (UniqueName: \"kubernetes.io/projected/ecdb2099-9206-4f56-bd2f-4d5b7338559a-kube-api-access-z5l9k\") pod \"calico-apiserver-77f86bc66d-bzk7k\" (UID: \"ecdb2099-9206-4f56-bd2f-4d5b7338559a\") " pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" May 17 00:21:47.107967 kubelet[2509]: I0517 00:21:47.107923 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eab83464-1af3-4982-95f5-5c46d047a7e6-config-volume\") pod \"coredns-668d6bf9bc-g4khv\" (UID: \"eab83464-1af3-4982-95f5-5c46d047a7e6\") " pod="kube-system/coredns-668d6bf9bc-g4khv" May 17 00:21:47.107967 kubelet[2509]: I0517 00:21:47.107937 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jrr\" (UniqueName: \"kubernetes.io/projected/eab83464-1af3-4982-95f5-5c46d047a7e6-kube-api-access-j9jrr\") pod \"coredns-668d6bf9bc-g4khv\" (UID: \"eab83464-1af3-4982-95f5-5c46d047a7e6\") " pod="kube-system/coredns-668d6bf9bc-g4khv" May 17 00:21:47.107967 kubelet[2509]: I0517 00:21:47.107954 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgwd\" (UniqueName: \"kubernetes.io/projected/1ddd81ac-9fd2-4e37-83a9-b3a3b4011761-kube-api-access-cqgwd\") pod \"coredns-668d6bf9bc-q7w6q\" (UID: \"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761\") " pod="kube-system/coredns-668d6bf9bc-q7w6q" May 17 00:21:47.107967 kubelet[2509]: I0517 00:21:47.107966 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-ca-bundle\") pod \"whisker-77988f4665-6r7kc\" (UID: \"38c66a96-d94f-40c8-93ac-4e0202591244\") " pod="calico-system/whisker-77988f4665-6r7kc" May 17 00:21:47.108065 kubelet[2509]: I0517 00:21:47.107981 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-backend-key-pair\") pod \"whisker-77988f4665-6r7kc\" (UID: \"38c66a96-d94f-40c8-93ac-4e0202591244\") " pod="calico-system/whisker-77988f4665-6r7kc" May 17 00:21:47.108065 kubelet[2509]: I0517 00:21:47.107993 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f18c687-4cb5-49f2-9647-374af2e4bff4-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-htsn7\" (UID: \"4f18c687-4cb5-49f2-9647-374af2e4bff4\") " pod="calico-system/goldmane-78d55f7ddc-htsn7" May 17 00:21:47.108065 kubelet[2509]: I0517 00:21:47.108007 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqcx\" (UniqueName: \"kubernetes.io/projected/4f18c687-4cb5-49f2-9647-374af2e4bff4-kube-api-access-7rqcx\") pod \"goldmane-78d55f7ddc-htsn7\" (UID: \"4f18c687-4cb5-49f2-9647-374af2e4bff4\") " pod="calico-system/goldmane-78d55f7ddc-htsn7" May 17 00:21:47.108065 kubelet[2509]: I0517 00:21:47.108030 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ddd81ac-9fd2-4e37-83a9-b3a3b4011761-config-volume\") pod \"coredns-668d6bf9bc-q7w6q\" (UID: \"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761\") " pod="kube-system/coredns-668d6bf9bc-q7w6q" May 17 00:21:47.108065 kubelet[2509]: I0517 00:21:47.108045 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqvq\" (UniqueName: \"kubernetes.io/projected/66e07ccd-2dbe-42fe-bd10-e349fb811eb6-kube-api-access-djqvq\") pod \"calico-kube-controllers-7998fc854-4sfsk\" (UID: \"66e07ccd-2dbe-42fe-bd10-e349fb811eb6\") " pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" May 17 00:21:47.108155 kubelet[2509]: I0517 00:21:47.108058 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecdb2099-9206-4f56-bd2f-4d5b7338559a-calico-apiserver-certs\") pod \"calico-apiserver-77f86bc66d-bzk7k\" (UID: \"ecdb2099-9206-4f56-bd2f-4d5b7338559a\") " pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" May 17 00:21:47.108155 kubelet[2509]: I0517 00:21:47.108076 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvdp\" (UniqueName: \"kubernetes.io/projected/38c66a96-d94f-40c8-93ac-4e0202591244-kube-api-access-wxvdp\") pod \"whisker-77988f4665-6r7kc\" (UID: \"38c66a96-d94f-40c8-93ac-4e0202591244\") " pod="calico-system/whisker-77988f4665-6r7kc" May 17 00:21:47.108155 kubelet[2509]: I0517 00:21:47.108090 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/708b28d1-b868-4f8b-b7c9-b5fa6b493a92-calico-apiserver-certs\") pod \"calico-apiserver-77f86bc66d-9ctqn\" (UID: \"708b28d1-b868-4f8b-b7c9-b5fa6b493a92\") " 
pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" May 17 00:21:47.108155 kubelet[2509]: I0517 00:21:47.108103 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66e07ccd-2dbe-42fe-bd10-e349fb811eb6-tigera-ca-bundle\") pod \"calico-kube-controllers-7998fc854-4sfsk\" (UID: \"66e07ccd-2dbe-42fe-bd10-e349fb811eb6\") " pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" May 17 00:21:47.108155 kubelet[2509]: I0517 00:21:47.108120 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f18c687-4cb5-49f2-9647-374af2e4bff4-config\") pod \"goldmane-78d55f7ddc-htsn7\" (UID: \"4f18c687-4cb5-49f2-9647-374af2e4bff4\") " pod="calico-system/goldmane-78d55f7ddc-htsn7" May 17 00:21:47.108259 kubelet[2509]: I0517 00:21:47.108133 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4f18c687-4cb5-49f2-9647-374af2e4bff4-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-htsn7\" (UID: \"4f18c687-4cb5-49f2-9647-374af2e4bff4\") " pod="calico-system/goldmane-78d55f7ddc-htsn7" May 17 00:21:47.322528 containerd[1455]: time="2025-05-17T00:21:47.321818359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77988f4665-6r7kc,Uid:38c66a96-d94f-40c8-93ac-4e0202591244,Namespace:calico-system,Attempt:0,}" May 17 00:21:47.329965 kubelet[2509]: E0517 00:21:47.329080 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:47.330035 containerd[1455]: time="2025-05-17T00:21:47.329896185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7w6q,Uid:1ddd81ac-9fd2-4e37-83a9-b3a3b4011761,Namespace:kube-system,Attempt:0,}" May 17 00:21:47.343777 kubelet[2509]: E0517 00:21:47.343759 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:47.352817 containerd[1455]: time="2025-05-17T00:21:47.352793903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7998fc854-4sfsk,Uid:66e07ccd-2dbe-42fe-bd10-e349fb811eb6,Namespace:calico-system,Attempt:0,}" May 17 00:21:47.353034 containerd[1455]: time="2025-05-17T00:21:47.353015293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4khv,Uid:eab83464-1af3-4982-95f5-5c46d047a7e6,Namespace:kube-system,Attempt:0,}" May 17 00:21:47.355936 containerd[1455]: time="2025-05-17T00:21:47.355918582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-htsn7,Uid:4f18c687-4cb5-49f2-9647-374af2e4bff4,Namespace:calico-system,Attempt:0,}" May 17 00:21:47.362863 containerd[1455]: time="2025-05-17T00:21:47.362489228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-bzk7k,Uid:ecdb2099-9206-4f56-bd2f-4d5b7338559a,Namespace:calico-apiserver,Attempt:0,}" May 17 00:21:47.385123 containerd[1455]: time="2025-05-17T00:21:47.385103507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-9ctqn,Uid:708b28d1-b868-4f8b-b7c9-b5fa6b493a92,Namespace:calico-apiserver,Attempt:0,}" May 17 00:21:47.472626 containerd[1455]: 
time="2025-05-17T00:21:47.472577713Z" level=error msg="Failed to destroy network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.473019 containerd[1455]: time="2025-05-17T00:21:47.472990533Z" level=error msg="encountered an error cleaning up failed sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.473053 containerd[1455]: time="2025-05-17T00:21:47.473035793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77988f4665-6r7kc,Uid:38c66a96-d94f-40c8-93ac-4e0202591244,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.474560 kubelet[2509]: E0517 00:21:47.473287 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.474560 kubelet[2509]: E0517 00:21:47.473353 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77988f4665-6r7kc" May 17 00:21:47.474560 kubelet[2509]: E0517 00:21:47.473374 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77988f4665-6r7kc" May 17 00:21:47.474665 kubelet[2509]: E0517 00:21:47.473408 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77988f4665-6r7kc_calico-system(38c66a96-d94f-40c8-93ac-4e0202591244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77988f4665-6r7kc_calico-system(38c66a96-d94f-40c8-93ac-4e0202591244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77988f4665-6r7kc" 
podUID="38c66a96-d94f-40c8-93ac-4e0202591244" May 17 00:21:47.495201 containerd[1455]: time="2025-05-17T00:21:47.495158552Z" level=error msg="Failed to destroy network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.495547 containerd[1455]: time="2025-05-17T00:21:47.495527712Z" level=error msg="encountered an error cleaning up failed sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.496325 containerd[1455]: time="2025-05-17T00:21:47.496305282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7w6q,Uid:1ddd81ac-9fd2-4e37-83a9-b3a3b4011761,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.497136 kubelet[2509]: E0517 00:21:47.496533 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.497136 kubelet[2509]: E0517 00:21:47.496578 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q7w6q" May 17 00:21:47.497136 kubelet[2509]: E0517 00:21:47.496595 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q7w6q" May 17 00:21:47.497247 kubelet[2509]: E0517 00:21:47.496624 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q7w6q_kube-system(1ddd81ac-9fd2-4e37-83a9-b3a3b4011761)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q7w6q_kube-system(1ddd81ac-9fd2-4e37-83a9-b3a3b4011761)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q7w6q" podUID="1ddd81ac-9fd2-4e37-83a9-b3a3b4011761" May 17 00:21:47.521150 containerd[1455]: time="2025-05-17T00:21:47.521122849Z" level=error msg="Failed to destroy network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.521823 containerd[1455]: time="2025-05-17T00:21:47.521785009Z" level=error msg="encountered an error cleaning up failed sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.521998 containerd[1455]: time="2025-05-17T00:21:47.521979459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4khv,Uid:eab83464-1af3-4982-95f5-5c46d047a7e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.523478 kubelet[2509]: E0517 00:21:47.522327 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.523478 kubelet[2509]: E0517 00:21:47.522390 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4khv" May 17 00:21:47.523478 kubelet[2509]: E0517 00:21:47.522426 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g4khv" May 17 00:21:47.523597 kubelet[2509]: E0517 00:21:47.522467 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g4khv_kube-system(eab83464-1af3-4982-95f5-5c46d047a7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g4khv_kube-system(eab83464-1af3-4982-95f5-5c46d047a7e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4khv" podUID="eab83464-1af3-4982-95f5-5c46d047a7e6" May 17 00:21:47.528989 containerd[1455]: time="2025-05-17T00:21:47.528946345Z" level=error msg="Failed to destroy network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.529312 containerd[1455]: time="2025-05-17T00:21:47.529283605Z" level=error msg="encountered an error cleaning up failed sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.529355 containerd[1455]: time="2025-05-17T00:21:47.529330935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-9ctqn,Uid:708b28d1-b868-4f8b-b7c9-b5fa6b493a92,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.531148 kubelet[2509]: E0517 00:21:47.529475 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.531148 kubelet[2509]: E0517 00:21:47.529524 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" May 17 00:21:47.531148 kubelet[2509]: E0517 00:21:47.529537 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" May 17 00:21:47.531249 containerd[1455]: time="2025-05-17T00:21:47.530314125Z" level=error msg="Failed to destroy network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.531249 containerd[1455]: time="2025-05-17T00:21:47.530598574Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.531249 containerd[1455]: time="2025-05-17T00:21:47.530720584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7998fc854-4sfsk,Uid:66e07ccd-2dbe-42fe-bd10-e349fb811eb6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.531373 kubelet[2509]: E0517 00:21:47.529562 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77f86bc66d-9ctqn_calico-apiserver(708b28d1-b868-4f8b-b7c9-b5fa6b493a92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77f86bc66d-9ctqn_calico-apiserver(708b28d1-b868-4f8b-b7c9-b5fa6b493a92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" podUID="708b28d1-b868-4f8b-b7c9-b5fa6b493a92" May 17 00:21:47.531373 kubelet[2509]: E0517 00:21:47.530880 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.531373 kubelet[2509]: E0517 00:21:47.530935 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" May 17 00:21:47.531556 kubelet[2509]: E0517 00:21:47.530953 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" May 17 00:21:47.531556 kubelet[2509]: E0517 00:21:47.531003 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7998fc854-4sfsk_calico-system(66e07ccd-2dbe-42fe-bd10-e349fb811eb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7998fc854-4sfsk_calico-system(66e07ccd-2dbe-42fe-bd10-e349fb811eb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" podUID="66e07ccd-2dbe-42fe-bd10-e349fb811eb6" May 17 00:21:47.540776 containerd[1455]: time="2025-05-17T00:21:47.540754809Z" level=error msg="Failed to destroy network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.541118 containerd[1455]: time="2025-05-17T00:21:47.541095029Z" level=error msg="encountered an error cleaning up failed sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.542250 containerd[1455]: time="2025-05-17T00:21:47.542226269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-htsn7,Uid:4f18c687-4cb5-49f2-9647-374af2e4bff4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.542408 kubelet[2509]: E0517 00:21:47.542392 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.542482 kubelet[2509]: E0517 00:21:47.542470 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-htsn7" May 17 00:21:47.542564 kubelet[2509]: E0517 00:21:47.542552 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-htsn7" May 17 00:21:47.542655 kubelet[2509]: E0517 00:21:47.542639 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-78d55f7ddc-htsn7_calico-system(4f18c687-4cb5-49f2-9647-374af2e4bff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-htsn7_calico-system(4f18c687-4cb5-49f2-9647-374af2e4bff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:21:47.547290 containerd[1455]: time="2025-05-17T00:21:47.547257046Z" level=error msg="Failed to destroy network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.547548 containerd[1455]: time="2025-05-17T00:21:47.547526766Z" level=error msg="encountered an error cleaning up failed sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.547574 containerd[1455]: time="2025-05-17T00:21:47.547560816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-bzk7k,Uid:ecdb2099-9206-4f56-bd2f-4d5b7338559a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.547718 kubelet[2509]: E0517 00:21:47.547685 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.547758 kubelet[2509]: E0517 00:21:47.547722 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" May 17 00:21:47.547758 kubelet[2509]: E0517 00:21:47.547737 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" May 17 00:21:47.547802 kubelet[2509]: E0517 00:21:47.547767 
2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77f86bc66d-bzk7k_calico-apiserver(ecdb2099-9206-4f56-bd2f-4d5b7338559a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77f86bc66d-bzk7k_calico-apiserver(ecdb2099-9206-4f56-bd2f-4d5b7338559a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" podUID="ecdb2099-9206-4f56-bd2f-4d5b7338559a" May 17 00:21:47.847066 systemd[1]: Created slice kubepods-besteffort-pode793e701_f5aa_4190_a1ec_13776ffa5239.slice - libcontainer container kubepods-besteffort-pode793e701_f5aa_4190_a1ec_13776ffa5239.slice. May 17 00:21:47.850218 containerd[1455]: time="2025-05-17T00:21:47.850128905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfhj5,Uid:e793e701-f5aa-4190-a1ec-13776ffa5239,Namespace:calico-system,Attempt:0,}" May 17 00:21:47.907431 containerd[1455]: time="2025-05-17T00:21:47.907373606Z" level=error msg="Failed to destroy network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.907837 containerd[1455]: time="2025-05-17T00:21:47.907799056Z" level=error msg="encountered an error cleaning up failed sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.907882 containerd[1455]: time="2025-05-17T00:21:47.907855776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfhj5,Uid:e793e701-f5aa-4190-a1ec-13776ffa5239,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.908149 kubelet[2509]: E0517 00:21:47.908088 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.908149 kubelet[2509]: E0517 00:21:47.908143 2509 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mfhj5"
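When RunPodSandbox fails like this, kubelet garbage-collects the half-created sandbox, which is what the StopPodSandbox and "Ensure that sandbox ... has been cleanup successfully" entries below record; the delete-side CNI call hits the same missing nodename file, so the sandbox is marked SANDBOX_UNKNOWN. A sketch of the equivalent CRI calls against containerd's socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules; the sandbox ID is one of the failed IDs from the log:

// sandboxcleanup.go - a sketch of the CRI cleanup path, not kubelet's code.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd exposes the CRI runtime service on its main socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// One of the failed sandbox IDs from the log.
	const sandboxID = "2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3"

	// Stop tears down networking (the CNI delete that also fails while
	// calico/node is absent) and kills the sandbox container...
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Printf("stop failed: %v", err)
	}
	// ...and Remove deletes the sandbox record itself.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Printf("remove failed: %v", err)
	}
	fmt.Println("cleanup attempted for", sandboxID)
}

May 17 00:21:47.908382 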
kubelet[2509]: E0517 00:21:47.908161 2509 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mfhj5" May 17 00:21:47.908382 kubelet[2509]: E0517 00:21:47.908230 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mfhj5_calico-system(e793e701-f5aa-4190-a1ec-13776ffa5239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mfhj5_calico-system(e793e701-f5aa-4190-a1ec-13776ffa5239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mfhj5" podUID="e793e701-f5aa-4190-a1ec-13776ffa5239" May 17 00:21:47.912431 kubelet[2509]: I0517 00:21:47.912399 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:21:47.915025 containerd[1455]: time="2025-05-17T00:21:47.914990102Z" level=info msg="StopPodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\"" May 17 00:21:47.915211 containerd[1455]: time="2025-05-17T00:21:47.915168782Z" level=info msg="Ensure that sandbox 2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3 in task-service has been cleanup successfully" May 17 00:21:47.916812 kubelet[2509]: I0517 00:21:47.916607 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:21:47.917189 containerd[1455]: time="2025-05-17T00:21:47.917020871Z" level=info msg="StopPodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\"" May 17 00:21:47.917300 containerd[1455]: time="2025-05-17T00:21:47.917267441Z" level=info msg="Ensure that sandbox cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502 in task-service has been cleanup successfully" May 17 00:21:47.918802 kubelet[2509]: I0517 00:21:47.918744 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:21:47.919207 containerd[1455]: time="2025-05-17T00:21:47.919110750Z" level=info msg="StopPodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\"" May 17 00:21:47.919301 containerd[1455]: time="2025-05-17T00:21:47.919271000Z" level=info msg="Ensure that sandbox 6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8 in task-service has been cleanup successfully" May 17 00:21:47.924462 kubelet[2509]: I0517 00:21:47.924416 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:21:47.926706 containerd[1455]: time="2025-05-17T00:21:47.926678066Z" level=info msg="StopPodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\"" May 17 
00:21:47.927371 containerd[1455]: time="2025-05-17T00:21:47.927341376Z" level=info msg="Ensure that sandbox d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058 in task-service has been cleanup successfully" May 17 00:21:47.937607 kubelet[2509]: I0517 00:21:47.937472 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:21:47.938138 containerd[1455]: time="2025-05-17T00:21:47.938007061Z" level=info msg="StopPodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\"" May 17 00:21:47.938171 containerd[1455]: time="2025-05-17T00:21:47.938153821Z" level=info msg="Ensure that sandbox 438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2 in task-service has been cleanup successfully" May 17 00:21:47.939443 kubelet[2509]: I0517 00:21:47.939409 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:21:47.939834 containerd[1455]: time="2025-05-17T00:21:47.939768880Z" level=info msg="StopPodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\"" May 17 00:21:47.940031 containerd[1455]: time="2025-05-17T00:21:47.939903900Z" level=info msg="Ensure that sandbox 864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1 in task-service has been cleanup successfully" May 17 00:21:47.940857 containerd[1455]: time="2025-05-17T00:21:47.940742249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:21:47.950593 kubelet[2509]: I0517 00:21:47.950359 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:21:47.951880 containerd[1455]: time="2025-05-17T00:21:47.951418664Z" level=info msg="StopPodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\"" May 17 00:21:47.952599 containerd[1455]: time="2025-05-17T00:21:47.952581883Z" level=info msg="Ensure that sandbox 985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46 in task-service has been cleanup successfully" May 17 00:21:47.956141 kubelet[2509]: I0517 00:21:47.956067 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:21:47.957407 containerd[1455]: time="2025-05-17T00:21:47.957206931Z" level=info msg="StopPodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\"" May 17 00:21:47.963335 containerd[1455]: time="2025-05-17T00:21:47.958362470Z" level=info msg="Ensure that sandbox a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4 in task-service has been cleanup successfully" May 17 00:21:47.993361 containerd[1455]: time="2025-05-17T00:21:47.993325673Z" level=error msg="StopPodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" failed" error="failed to destroy network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:47.993970 kubelet[2509]: E0517 00:21:47.993681 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:21:47.993970 kubelet[2509]: E0517 00:21:47.993755 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502"} May 17 00:21:47.993970 kubelet[2509]: E0517 00:21:47.993815 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"708b28d1-b868-4f8b-b7c9-b5fa6b493a92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:47.993970 kubelet[2509]: E0517 00:21:47.993839 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"708b28d1-b868-4f8b-b7c9-b5fa6b493a92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" podUID="708b28d1-b868-4f8b-b7c9-b5fa6b493a92" May 17 00:21:48.018496 containerd[1455]: time="2025-05-17T00:21:48.017964421Z" level=error msg="StopPodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" failed" error="failed to destroy network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.019096 kubelet[2509]: E0517 00:21:48.019074 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:21:48.019188 kubelet[2509]: E0517 00:21:48.019160 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3"} May 17 00:21:48.019275 kubelet[2509]: E0517 00:21:48.019262 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66e07ccd-2dbe-42fe-bd10-e349fb811eb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" May 17 00:21:48.019369 kubelet[2509]: E0517 00:21:48.019354 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66e07ccd-2dbe-42fe-bd10-e349fb811eb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" podUID="66e07ccd-2dbe-42fe-bd10-e349fb811eb6" May 17 00:21:48.023014 containerd[1455]: time="2025-05-17T00:21:48.022987138Z" level=error msg="StopPodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" failed" error="failed to destroy network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.023241 kubelet[2509]: E0517 00:21:48.023155 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:21:48.023312 kubelet[2509]: E0517 00:21:48.023301 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4"} May 17 00:21:48.023393 kubelet[2509]: E0517 00:21:48.023380 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eab83464-1af3-4982-95f5-5c46d047a7e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:48.023487 kubelet[2509]: E0517 00:21:48.023463 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eab83464-1af3-4982-95f5-5c46d047a7e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g4khv" podUID="eab83464-1af3-4982-95f5-5c46d047a7e6" May 17 00:21:48.046745 containerd[1455]: time="2025-05-17T00:21:48.046561766Z" level=error msg="StopPodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" failed" error="failed to destroy network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.048167 kubelet[2509]: E0517 00:21:48.048106 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:21:48.048268 kubelet[2509]: E0517 00:21:48.048227 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46"} May 17 00:21:48.048317 kubelet[2509]: E0517 00:21:48.048286 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38c66a96-d94f-40c8-93ac-4e0202591244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:48.048365 kubelet[2509]: E0517 00:21:48.048316 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38c66a96-d94f-40c8-93ac-4e0202591244\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77988f4665-6r7kc" podUID="38c66a96-d94f-40c8-93ac-4e0202591244" May 17 00:21:48.049834 containerd[1455]: time="2025-05-17T00:21:48.049797815Z" level=error msg="StopPodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" failed" error="failed to destroy network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.050519 kubelet[2509]: E0517 00:21:48.049998 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:21:48.050519 kubelet[2509]: E0517 00:21:48.050033 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1"} May 17 00:21:48.050519 kubelet[2509]: E0517 00:21:48.050055 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f18c687-4cb5-49f2-9647-374af2e4bff4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:48.050519 kubelet[2509]: E0517 00:21:48.050073 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f18c687-4cb5-49f2-9647-374af2e4bff4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:21:48.054376 containerd[1455]: time="2025-05-17T00:21:48.054296002Z" level=error msg="StopPodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" failed" error="failed to destroy network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.054484 kubelet[2509]: E0517 00:21:48.054445 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:21:48.054536 kubelet[2509]: E0517 00:21:48.054485 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2"} May 17 00:21:48.054536 kubelet[2509]: E0517 00:21:48.054511 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e793e701-f5aa-4190-a1ec-13776ffa5239\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:48.054668 kubelet[2509]: E0517 00:21:48.054535 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e793e701-f5aa-4190-a1ec-13776ffa5239\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mfhj5" podUID="e793e701-f5aa-4190-a1ec-13776ffa5239" May 17 00:21:48.054927 containerd[1455]: time="2025-05-17T00:21:48.054896462Z" level=error msg="StopPodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" 
failed" error="failed to destroy network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.055240 kubelet[2509]: E0517 00:21:48.055090 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:21:48.055240 kubelet[2509]: E0517 00:21:48.055125 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8"} May 17 00:21:48.055240 kubelet[2509]: E0517 00:21:48.055146 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecdb2099-9206-4f56-bd2f-4d5b7338559a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:48.055240 kubelet[2509]: E0517 00:21:48.055165 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecdb2099-9206-4f56-bd2f-4d5b7338559a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" podUID="ecdb2099-9206-4f56-bd2f-4d5b7338559a" May 17 00:21:48.055584 containerd[1455]: time="2025-05-17T00:21:48.055536402Z" level=error msg="StopPodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" failed" error="failed to destroy network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:21:48.055745 kubelet[2509]: E0517 00:21:48.055688 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:21:48.055745 kubelet[2509]: E0517 00:21:48.055730 2509 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058"} May 17 00:21:48.055803 kubelet[2509]: 
E0517 00:21:48.055754 2509 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:21:48.055803 kubelet[2509]: E0517 00:21:48.055773 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q7w6q" podUID="1ddd81ac-9fd2-4e37-83a9-b3a3b4011761" May 17 00:21:48.336538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4-shm.mount: Deactivated successfully. May 17 00:21:48.337356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3-shm.mount: Deactivated successfully. May 17 00:21:48.337443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058-shm.mount: Deactivated successfully. May 17 00:21:48.337514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46-shm.mount: Deactivated successfully. May 17 00:21:50.802458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873009121.mount: Deactivated successfully. 
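Every CreatePodSandbox and StopPodSandbox failure above dies on the same stat: the Calico CNI plugin will not run until /var/lib/calico/nodename exists, a file the calico/node container writes once it is up with /var/lib/calico/ mounted. A minimal Go sketch of that gate, using only the path quoted verbatim in the errors (illustrative, not Calico's actual source):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path taken verbatim from the CNI errors above. Until calico/node
    	// starts and writes this file, every CNI add/delete on the node
    	// fails exactly as logged.
    	const nodenameFile = "/var/lib/calico/nodename"
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		fmt.Printf("CNI would fail: %v\n", err) // "no such file or directory" here
    		return
    	}
    	fmt.Printf("calico/node is ready; node name: %s\n", data)
    }

This is also why the errors resolve on their own below: the calico/node image is still being pulled at this point, and once its container starts, the file appears and sandbox setup begins to succeed.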
May 17 00:21:50.828037 containerd[1455]: time="2025-05-17T00:21:50.828001095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:50.828678 containerd[1455]: time="2025-05-17T00:21:50.828635505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:21:50.829252 containerd[1455]: time="2025-05-17T00:21:50.829207855Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:50.830518 containerd[1455]: time="2025-05-17T00:21:50.830499984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:50.831310 containerd[1455]: time="2025-05-17T00:21:50.830990204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 2.890214055s" May 17 00:21:50.831310 containerd[1455]: time="2025-05-17T00:21:50.831020724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:21:50.851479 containerd[1455]: time="2025-05-17T00:21:50.851456054Z" level=info msg="CreateContainer within sandbox \"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:21:50.862399 containerd[1455]: time="2025-05-17T00:21:50.862360608Z" level=info msg="CreateContainer within sandbox \"19f19d01ea16de87432089d7a11a22a53924fa385fc486d1335251004241d65b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961\"" May 17 00:21:50.863591 containerd[1455]: time="2025-05-17T00:21:50.863533658Z" level=info msg="StartContainer for \"57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961\"" May 17 00:21:50.886293 systemd[1]: Started cri-containerd-57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961.scope - libcontainer container 57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961. May 17 00:21:50.913013 containerd[1455]: time="2025-05-17T00:21:50.912994033Z" level=info msg="StartContainer for \"57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961\" returns successfully" May 17 00:21:50.983363 kubelet[2509]: I0517 00:21:50.983233 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fh7nf" podStartSLOduration=0.719640631 podStartE2EDuration="8.980325719s" podCreationTimestamp="2025-05-17 00:21:42 +0000 UTC" firstStartedPulling="2025-05-17 00:21:42.571072545 +0000 UTC m=+19.844699577" lastFinishedPulling="2025-05-17 00:21:50.831757633 +0000 UTC m=+28.105384665" observedRunningTime="2025-05-17 00:21:50.980044219 +0000 UTC m=+28.253671261" watchObservedRunningTime="2025-05-17 00:21:50.980325719 +0000 UTC m=+28.253952761" May 17 00:21:50.994682 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
May 17 00:21:50.994956 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 17 00:21:51.082297 containerd[1455]: time="2025-05-17T00:21:51.081955148Z" level=info msg="StopPodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\"" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.161 [INFO][3749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.162 [INFO][3749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" iface="eth0" netns="/var/run/netns/cni-7a35828c-23b6-3c07-b3bc-8fac18f35a21" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.162 [INFO][3749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" iface="eth0" netns="/var/run/netns/cni-7a35828c-23b6-3c07-b3bc-8fac18f35a21" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.164 [INFO][3749] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" iface="eth0" netns="/var/run/netns/cni-7a35828c-23b6-3c07-b3bc-8fac18f35a21" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.164 [INFO][3749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.164 [INFO][3749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.182 [INFO][3762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.182 [INFO][3762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.182 [INFO][3762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.191 [WARNING][3762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.192 [INFO][3762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.193 [INFO][3762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:51.199101 containerd[1455]: 2025-05-17 00:21:51.196 [INFO][3749] cni-plugin/k8s.go 653: Teardown processing complete.
ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:21:51.199817 containerd[1455]: time="2025-05-17T00:21:51.199620889Z" level=info msg="TearDown network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" successfully" May 17 00:21:51.199817 containerd[1455]: time="2025-05-17T00:21:51.199660049Z" level=info msg="StopPodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" returns successfully" May 17 00:21:51.238213 kubelet[2509]: I0517 00:21:51.237838 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-backend-key-pair\") pod \"38c66a96-d94f-40c8-93ac-4e0202591244\" (UID: \"38c66a96-d94f-40c8-93ac-4e0202591244\") " May 17 00:21:51.238213 kubelet[2509]: I0517 00:21:51.237875 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-ca-bundle\") pod \"38c66a96-d94f-40c8-93ac-4e0202591244\" (UID: \"38c66a96-d94f-40c8-93ac-4e0202591244\") " May 17 00:21:51.238213 kubelet[2509]: I0517 00:21:51.237892 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxvdp\" (UniqueName: \"kubernetes.io/projected/38c66a96-d94f-40c8-93ac-4e0202591244-kube-api-access-wxvdp\") pod \"38c66a96-d94f-40c8-93ac-4e0202591244\" (UID: \"38c66a96-d94f-40c8-93ac-4e0202591244\") " May 17 00:21:51.238747 kubelet[2509]: I0517 00:21:51.238724 2509 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "38c66a96-d94f-40c8-93ac-4e0202591244" (UID: "38c66a96-d94f-40c8-93ac-4e0202591244"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:21:51.243234 kubelet[2509]: I0517 00:21:51.243206 2509 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c66a96-d94f-40c8-93ac-4e0202591244-kube-api-access-wxvdp" (OuterVolumeSpecName: "kube-api-access-wxvdp") pod "38c66a96-d94f-40c8-93ac-4e0202591244" (UID: "38c66a96-d94f-40c8-93ac-4e0202591244"). InnerVolumeSpecName "kube-api-access-wxvdp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:21:51.243866 kubelet[2509]: I0517 00:21:51.243842 2509 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "38c66a96-d94f-40c8-93ac-4e0202591244" (UID: "38c66a96-d94f-40c8-93ac-4e0202591244"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:21:51.338428 kubelet[2509]: I0517 00:21:51.338350 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-ca-bundle\") on node \"172-233-222-125\" DevicePath \"\"" May 17 00:21:51.338428 kubelet[2509]: I0517 00:21:51.338379 2509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxvdp\" (UniqueName: \"kubernetes.io/projected/38c66a96-d94f-40c8-93ac-4e0202591244-kube-api-access-wxvdp\") on node \"172-233-222-125\" DevicePath \"\"" May 17 00:21:51.338428 kubelet[2509]: I0517 00:21:51.338391 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38c66a96-d94f-40c8-93ac-4e0202591244-whisker-backend-key-pair\") on node \"172-233-222-125\" DevicePath \"\"" May 17 00:21:51.802536 systemd[1]: run-netns-cni\x2d7a35828c\x2d23b6\x2d3c07\x2db3bc\x2d8fac18f35a21.mount: Deactivated successfully. May 17 00:21:51.802634 systemd[1]: var-lib-kubelet-pods-38c66a96\x2dd94f\x2d40c8\x2d93ac\x2d4e0202591244-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwxvdp.mount: Deactivated successfully. May 17 00:21:51.802697 systemd[1]: var-lib-kubelet-pods-38c66a96\x2dd94f\x2d40c8\x2d93ac\x2d4e0202591244-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:21:51.974753 systemd[1]: Removed slice kubepods-besteffort-pod38c66a96_d94f_40c8_93ac_4e0202591244.slice - libcontainer container kubepods-besteffort-pod38c66a96_d94f_40c8_93ac_4e0202591244.slice. May 17 00:21:52.022635 systemd[1]: Created slice kubepods-besteffort-pod7b75dfdd_c774_4c10_b431_7a20d6743288.slice - libcontainer container kubepods-besteffort-pod7b75dfdd_c774_4c10_b431_7a20d6743288.slice. 
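The \x2d runs in the mount-unit names above are not corruption but systemd unit-name escaping: '/' in a path maps to '-', so literal '-' characters (and other special bytes) must be hex-escaped. A rough Go approximation of that scheme, in the spirit of systemd-escape --path (assumed behavior, for illustration only; not systemd's actual code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // systemdEscape approximates systemd's unit-name escaping: '/'
    // becomes '-', and any byte outside [A-Za-z0-9:_.] is emitted as
    // a \xXX hex escape.
    func systemdEscape(path string) string {
    	var b strings.Builder
    	for i := 0; i < len(path); i++ {
    		c := path[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
    			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    func main() {
    	// Leading '/' dropped, as systemd does for path-derived units.
    	fmt.Println(systemdEscape("run/netns/cni-7a35828c-23b6-3c07-b3bc-8fac18f35a21") + ".mount")
    }

Running it prints run-netns-cni\x2d7a35828c\x2d23b6\x2d3c07\x2db3bc\x2d8fac18f35a21.mount, the unit name deactivated above.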
May 17 00:21:52.043626 kubelet[2509]: I0517 00:21:52.043559 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96t5r\" (UniqueName: \"kubernetes.io/projected/7b75dfdd-c774-4c10-b431-7a20d6743288-kube-api-access-96t5r\") pod \"whisker-86c8456b49-frszb\" (UID: \"7b75dfdd-c774-4c10-b431-7a20d6743288\") " pod="calico-system/whisker-86c8456b49-frszb" May 17 00:21:52.043626 kubelet[2509]: I0517 00:21:52.043622 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b75dfdd-c774-4c10-b431-7a20d6743288-whisker-backend-key-pair\") pod \"whisker-86c8456b49-frszb\" (UID: \"7b75dfdd-c774-4c10-b431-7a20d6743288\") " pod="calico-system/whisker-86c8456b49-frszb" May 17 00:21:52.044149 kubelet[2509]: I0517 00:21:52.043774 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b75dfdd-c774-4c10-b431-7a20d6743288-whisker-ca-bundle\") pod \"whisker-86c8456b49-frszb\" (UID: \"7b75dfdd-c774-4c10-b431-7a20d6743288\") " pod="calico-system/whisker-86c8456b49-frszb" May 17 00:21:52.328560 containerd[1455]: time="2025-05-17T00:21:52.328008645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c8456b49-frszb,Uid:7b75dfdd-c774-4c10-b431-7a20d6743288,Namespace:calico-system,Attempt:0,}" May 17 00:21:52.455550 systemd-networkd[1381]: caliaa7d6022d33: Link UP May 17 00:21:52.456751 systemd-networkd[1381]: caliaa7d6022d33: Gained carrier May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.370 [INFO][3866] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.389 [INFO][3866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0 whisker-86c8456b49- calico-system 7b75dfdd-c774-4c10-b431-7a20d6743288 917 0 2025-05-17 00:21:52 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86c8456b49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-233-222-125 whisker-86c8456b49-frszb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaa7d6022d33 [] [] <nil>}} ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.389 [INFO][3866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.412 [INFO][3901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" HandleID="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Workload="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.412 [INFO][3901] ipam/ipam_plugin.go 265: Auto assigning IP
ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" HandleID="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Workload="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9890), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-125", "pod":"whisker-86c8456b49-frszb", "timestamp":"2025-05-17 00:21:52.412564193 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.412 [INFO][3901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.412 [INFO][3901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.412 [INFO][3901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.419 [INFO][3901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.423 [INFO][3901] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.426 [INFO][3901] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.428 [INFO][3901] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.430 [INFO][3901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.430 [INFO][3901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.431 [INFO][3901] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49 May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.434 [INFO][3901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.438 [INFO][3901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.129/26] block=192.168.33.128/26 handle="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.438 [INFO][3901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.129/26] handle="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" host="172-233-222-125" May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.439 [INFO][3901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:52.478763 containerd[1455]: 2025-05-17 00:21:52.439 [INFO][3901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.129/26] IPv6=[] ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" HandleID="k8s-pod-network.86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Workload="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.479280 containerd[1455]: 2025-05-17 00:21:52.442 [INFO][3866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0", GenerateName:"whisker-86c8456b49-", Namespace:"calico-system", SelfLink:"", UID:"7b75dfdd-c774-4c10-b431-7a20d6743288", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c8456b49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"whisker-86c8456b49-frszb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa7d6022d33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:52.479280 containerd[1455]: 2025-05-17 00:21:52.442 [INFO][3866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.129/32] ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.479280 containerd[1455]: 2025-05-17 00:21:52.443 [INFO][3866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa7d6022d33 ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.479280 containerd[1455]: 2025-05-17 00:21:52.456 [INFO][3866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.479280 containerd[1455]: 2025-05-17 00:21:52.457 [INFO][3866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb"
WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0", GenerateName:"whisker-86c8456b49-", Namespace:"calico-system", SelfLink:"", UID:"7b75dfdd-c774-4c10-b431-7a20d6743288", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c8456b49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49", Pod:"whisker-86c8456b49-frszb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaa7d6022d33", MAC:"82:b9:56:0f:3b:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:52.479280 containerd[1455]: 2025-05-17 00:21:52.470 [INFO][3866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49" Namespace="calico-system" Pod="whisker-86c8456b49-frszb" WorkloadEndpoint="172--233--222--125-k8s-whisker--86c8456b49--frszb-eth0" May 17 00:21:52.508924 containerd[1455]: time="2025-05-17T00:21:52.508621025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:52.508924 containerd[1455]: time="2025-05-17T00:21:52.508703595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:52.508924 containerd[1455]: time="2025-05-17T00:21:52.508783075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:52.509525 containerd[1455]: time="2025-05-17T00:21:52.509431074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:52.530425 systemd[1]: Started cri-containerd-86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49.scope - libcontainer container 86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49.
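The [INFO][3901] ipam trace above is one complete block-affinity allocation: the plugin takes the host-wide IPAM lock, confirms this host's affinity for block 192.168.33.128/26, claims the first free address, 192.168.33.129, and releases the lock. A toy Go model of the final claim step (assuming only that the block's network address .128 is unavailable; Calico's real allocator also persists handles and reservations in the datastore):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Block and result taken from the log above.
    	block := netip.MustParsePrefix("192.168.33.128/26")
    	used := map[netip.Addr]bool{
    		block.Addr(): true, // 192.168.33.128, the block's network address
    	}

    	// Walk the block and claim the first unused address.
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		if !used[a] {
    			fmt.Println("assigned", a) // prints 192.168.33.129, as logged
    			break
    		}
    	}
    }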
May 17 00:21:52.585336 containerd[1455]: time="2025-05-17T00:21:52.585161826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c8456b49-frszb,Uid:7b75dfdd-c774-4c10-b431-7a20d6743288,Namespace:calico-system,Attempt:0,} returns sandbox id \"86d9c5cb34f17d732c222f62f7f68e300349b55cce9eb0f7edfcbc16e4477e49\"" May 17 00:21:52.588436 containerd[1455]: time="2025-05-17T00:21:52.588306145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:21:52.685199 containerd[1455]: time="2025-05-17T00:21:52.685142826Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:52.686300 containerd[1455]: time="2025-05-17T00:21:52.686236766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:52.686437 containerd[1455]: time="2025-05-17T00:21:52.686327766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:21:52.686497 kubelet[2509]: E0517 00:21:52.686441 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:52.686497 kubelet[2509]: E0517 00:21:52.686490 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:52.686670 kubelet[2509]: E0517 00:21:52.686605 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:839122e2a12b4271ae6fd9949780c33e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:52.688736 containerd[1455]: time="2025-05-17T00:21:52.688707445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:21:52.798354 containerd[1455]: time="2025-05-17T00:21:52.798275270Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:52.799666 containerd[1455]: time="2025-05-17T00:21:52.799582549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:52.799666 containerd[1455]: time="2025-05-17T00:21:52.799626779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:21:52.800098 kubelet[2509]: E0517 00:21:52.799859 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:52.800098 kubelet[2509]: E0517 00:21:52.799921 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:52.800215 kubelet[2509]: E0517 00:21:52.800036 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:52.803306 kubelet[2509]: E0517 00:21:52.803257 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:21:52.842977 kubelet[2509]: I0517 00:21:52.842875 2509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c66a96-d94f-40c8-93ac-4e0202591244" path="/var/lib/kubelet/pods/38c66a96-d94f-40c8-93ac-4e0202591244/volumes" May 17 00:21:52.971212 kubelet[2509]: E0517 00:21:52.970918 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:21:53.973689 kubelet[2509]: E0517 00:21:53.973601 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:21:54.004514 
systemd-networkd[1381]: caliaa7d6022d33: Gained IPv6LL May 17 00:21:57.204006 kubelet[2509]: I0517 00:21:57.203311 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:57.204006 kubelet[2509]: E0517 00:21:57.203739 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:57.800224 kernel: bpftool[4073]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:21:57.979894 kubelet[2509]: E0517 00:21:57.979844 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:58.141241 systemd-networkd[1381]: vxlan.calico: Link UP May 17 00:21:58.141250 systemd-networkd[1381]: vxlan.calico: Gained carrier May 17 00:21:58.841352 containerd[1455]: time="2025-05-17T00:21:58.840911808Z" level=info msg="StopPodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\"" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.880 [INFO][4191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.880 [INFO][4191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" iface="eth0" netns="/var/run/netns/cni-d8772339-b442-28f6-7467-9713519ea938" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.881 [INFO][4191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" iface="eth0" netns="/var/run/netns/cni-d8772339-b442-28f6-7467-9713519ea938" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.881 [INFO][4191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" iface="eth0" netns="/var/run/netns/cni-d8772339-b442-28f6-7467-9713519ea938" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.881 [INFO][4191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.881 [INFO][4191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.913 [INFO][4198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.913 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.913 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.918 [WARNING][4198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.918 [INFO][4198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.919 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:58.924021 containerd[1455]: 2025-05-17 00:21:58.921 [INFO][4191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:21:58.929249 containerd[1455]: time="2025-05-17T00:21:58.924226566Z" level=info msg="TearDown network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" successfully" May 17 00:21:58.929249 containerd[1455]: time="2025-05-17T00:21:58.924255926Z" level=info msg="StopPodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" returns successfully" May 17 00:21:58.929249 containerd[1455]: time="2025-05-17T00:21:58.926896145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7w6q,Uid:1ddd81ac-9fd2-4e37-83a9-b3a3b4011761,Namespace:kube-system,Attempt:1,}" May 17 00:21:58.928491 systemd[1]: run-netns-cni\x2dd8772339\x2db442\x2d28f6\x2d7467\x2d9713519ea938.mount: Deactivated successfully. May 17 00:21:58.930290 kubelet[2509]: E0517 00:21:58.926350 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:59.015357 systemd-networkd[1381]: cali104398033cc: Link UP May 17 00:21:59.015571 systemd-networkd[1381]: cali104398033cc: Gained carrier May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.965 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0 coredns-668d6bf9bc- kube-system 1ddd81ac-9fd2-4e37-83a9-b3a3b4011761 965 0 2025-05-17 00:21:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-222-125 coredns-668d6bf9bc-q7w6q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali104398033cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.965 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.985 [INFO][4216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" HandleID="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.985 [INFO][4216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" HandleID="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad070), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-222-125", "pod":"coredns-668d6bf9bc-q7w6q", "timestamp":"2025-05-17 00:21:58.985726505 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.985 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.985 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.985 [INFO][4216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.990 [INFO][4216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.993 [INFO][4216] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.996 [INFO][4216] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:58.998 [INFO][4216] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.000 [INFO][4216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.000 [INFO][4216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.001 [INFO][4216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283 May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.003 [INFO][4216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.007 [INFO][4216] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.130/26] block=192.168.33.128/26 handle="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.007 [INFO][4216] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.33.130/26] handle="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" host="172-233-222-125" May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.007 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:59.028696 containerd[1455]: 2025-05-17 00:21:59.007 [INFO][4216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.130/26] IPv6=[] ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" HandleID="k8s-pod-network.759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.029169 containerd[1455]: 2025-05-17 00:21:59.009 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"coredns-668d6bf9bc-q7w6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali104398033cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:59.029169 containerd[1455]: 2025-05-17 00:21:59.009 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.130/32] ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.029169 containerd[1455]: 2025-05-17 00:21:59.009 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali104398033cc ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.029169 
containerd[1455]: 2025-05-17 00:21:59.014 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.029169 containerd[1455]: 2025-05-17 00:21:59.014 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283", Pod:"coredns-668d6bf9bc-q7w6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali104398033cc", MAC:"62:b8:a6:73:f4:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:59.029169 containerd[1455]: 2025-05-17 00:21:59.021 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7w6q" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:21:59.049679 containerd[1455]: time="2025-05-17T00:21:59.049490684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:59.049679 containerd[1455]: time="2025-05-17T00:21:59.049536274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:59.049679 containerd[1455]: time="2025-05-17T00:21:59.049545244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:59.049679 containerd[1455]: time="2025-05-17T00:21:59.049616673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:59.072302 systemd[1]: Started cri-containerd-759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283.scope - libcontainer container 759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283. May 17 00:21:59.111347 containerd[1455]: time="2025-05-17T00:21:59.111262243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7w6q,Uid:1ddd81ac-9fd2-4e37-83a9-b3a3b4011761,Namespace:kube-system,Attempt:1,} returns sandbox id \"759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283\"" May 17 00:21:59.112331 kubelet[2509]: E0517 00:21:59.112291 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:59.118528 containerd[1455]: time="2025-05-17T00:21:59.118391329Z" level=info msg="CreateContainer within sandbox \"759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:59.127915 containerd[1455]: time="2025-05-17T00:21:59.127883234Z" level=info msg="CreateContainer within sandbox \"759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cefb0e444bcba71fcbb636236eb145d273e17f710bbade14014db798c58f858\"" May 17 00:21:59.128867 containerd[1455]: time="2025-05-17T00:21:59.128247864Z" level=info msg="StartContainer for \"6cefb0e444bcba71fcbb636236eb145d273e17f710bbade14014db798c58f858\"" May 17 00:21:59.158305 systemd[1]: Started cri-containerd-6cefb0e444bcba71fcbb636236eb145d273e17f710bbade14014db798c58f858.scope - libcontainer container 6cefb0e444bcba71fcbb636236eb145d273e17f710bbade14014db798c58f858. May 17 00:21:59.180547 containerd[1455]: time="2025-05-17T00:21:59.180517708Z" level=info msg="StartContainer for \"6cefb0e444bcba71fcbb636236eb145d273e17f710bbade14014db798c58f858\" returns successfully" May 17 00:21:59.841237 containerd[1455]: time="2025-05-17T00:21:59.841063858Z" level=info msg="StopPodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\"" May 17 00:21:59.841840 containerd[1455]: time="2025-05-17T00:21:59.841289638Z" level=info msg="StopPodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\"" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.887 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.889 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" iface="eth0" netns="/var/run/netns/cni-3bb1f790-1835-b240-0c2f-f336771fb265" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.889 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" iface="eth0" netns="/var/run/netns/cni-3bb1f790-1835-b240-0c2f-f336771fb265" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.890 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" iface="eth0" netns="/var/run/netns/cni-3bb1f790-1835-b240-0c2f-f336771fb265" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.890 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.890 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.913 [INFO][4341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.913 [INFO][4341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.913 [INFO][4341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.918 [WARNING][4341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.918 [INFO][4341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.920 [INFO][4341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:59.924039 containerd[1455]: 2025-05-17 00:21:59.922 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:21:59.928416 containerd[1455]: time="2025-05-17T00:21:59.927633834Z" level=info msg="TearDown network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" successfully" May 17 00:21:59.928416 containerd[1455]: time="2025-05-17T00:21:59.927664004Z" level=info msg="StopPodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" returns successfully" May 17 00:21:59.928930 containerd[1455]: time="2025-05-17T00:21:59.928913834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfhj5,Uid:e793e701-f5aa-4190-a1ec-13776ffa5239,Namespace:calico-system,Attempt:1,}" May 17 00:21:59.935695 systemd[1]: run-netns-cni\x2d3bb1f790\x2d1835\x2db240\x2d0c2f\x2df336771fb265.mount: Deactivated successfully. 
May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.896 [INFO][4328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.896 [INFO][4328] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" iface="eth0" netns="/var/run/netns/cni-9c4757c3-a4a9-5924-ce9f-fb291f16a99d" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.897 [INFO][4328] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" iface="eth0" netns="/var/run/netns/cni-9c4757c3-a4a9-5924-ce9f-fb291f16a99d" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.897 [INFO][4328] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" iface="eth0" netns="/var/run/netns/cni-9c4757c3-a4a9-5924-ce9f-fb291f16a99d" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.897 [INFO][4328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.897 [INFO][4328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.919 [INFO][4346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.919 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.920 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.931 [WARNING][4346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.931 [INFO][4346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.933 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:59.938986 containerd[1455]: 2025-05-17 00:21:59.935 [INFO][4328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:21:59.940787 systemd[1]: run-netns-cni\x2d9c4757c3\x2da4a9\x2d5924\x2dce9f\x2dfb291f16a99d.mount: Deactivated successfully. 
May 17 00:21:59.940958 containerd[1455]: time="2025-05-17T00:21:59.940889198Z" level=info msg="TearDown network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" successfully" May 17 00:21:59.940958 containerd[1455]: time="2025-05-17T00:21:59.940907668Z" level=info msg="StopPodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" returns successfully" May 17 00:21:59.941765 containerd[1455]: time="2025-05-17T00:21:59.941371017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-bzk7k,Uid:ecdb2099-9206-4f56-bd2f-4d5b7338559a,Namespace:calico-apiserver,Attempt:1,}" May 17 00:21:59.956331 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL May 17 00:21:59.986282 kubelet[2509]: E0517 00:21:59.986254 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:21:59.998918 kubelet[2509]: I0517 00:21:59.998536 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q7w6q" podStartSLOduration=30.998520619 podStartE2EDuration="30.998520619s" podCreationTimestamp="2025-05-17 00:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:59.997470039 +0000 UTC m=+37.271097071" watchObservedRunningTime="2025-05-17 00:21:59.998520619 +0000 UTC m=+37.272147661" May 17 00:22:00.065372 systemd-networkd[1381]: calib7075275922: Link UP May 17 00:22:00.066060 systemd-networkd[1381]: calib7075275922: Gained carrier May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:21:59.983 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0 calico-apiserver-77f86bc66d- calico-apiserver ecdb2099-9206-4f56-bd2f-4d5b7338559a 978 0 2025-05-17 00:21:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77f86bc66d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-233-222-125 calico-apiserver-77f86bc66d-bzk7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib7075275922 [] [] }} ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:21:59.984 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.031 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" HandleID="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 
00:22:00.031 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" HandleID="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-233-222-125", "pod":"calico-apiserver-77f86bc66d-bzk7k", "timestamp":"2025-05-17 00:22:00.031818402 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.032 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.032 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.032 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.039 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.042 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.045 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.046 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.048 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.048 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.050 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.054 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.057 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.131/26] block=192.168.33.128/26 handle="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.057 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.131/26] handle="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" host="172-233-222-125" May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.057 [INFO][4379] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. May 17 00:22:00.080254 containerd[1455]: 2025-05-17 00:22:00.057 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.131/26] IPv6=[] ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" HandleID="k8s-pod-network.ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.080803 containerd[1455]: 2025-05-17 00:22:00.062 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecdb2099-9206-4f56-bd2f-4d5b7338559a", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"calico-apiserver-77f86bc66d-bzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7075275922", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:00.080803 containerd[1455]: 2025-05-17 00:22:00.062 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.131/32] ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.080803 containerd[1455]: 2025-05-17 00:22:00.062 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7075275922 ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.080803 containerd[1455]: 2025-05-17 00:22:00.066 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.080803 containerd[1455]: 2025-05-17 00:22:00.068 [INFO][4365] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecdb2099-9206-4f56-bd2f-4d5b7338559a", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec", Pod:"calico-apiserver-77f86bc66d-bzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7075275922", MAC:"2a:b7:69:c2:58:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:00.080803 containerd[1455]: 2025-05-17 00:22:00.076 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-bzk7k" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:00.100238 containerd[1455]: time="2025-05-17T00:22:00.099277109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:00.100238 containerd[1455]: time="2025-05-17T00:22:00.099323719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:00.100238 containerd[1455]: time="2025-05-17T00:22:00.099332198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.100238 containerd[1455]: time="2025-05-17T00:22:00.099399118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.116314 systemd[1]: Started cri-containerd-ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec.scope - libcontainer container ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec. 
May 17 00:22:00.153696 containerd[1455]: time="2025-05-17T00:22:00.153635451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-bzk7k,Uid:ecdb2099-9206-4f56-bd2f-4d5b7338559a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec\"" May 17 00:22:00.155885 containerd[1455]: time="2025-05-17T00:22:00.155729690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:22:00.171683 systemd-networkd[1381]: calif23f5b3db6a: Link UP May 17 00:22:00.172804 systemd-networkd[1381]: calif23f5b3db6a: Gained carrier May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:21:59.981 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-csi--node--driver--mfhj5-eth0 csi-node-driver- calico-system e793e701-f5aa-4190-a1ec-13776ffa5239 977 0 2025-05-17 00:21:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-233-222-125 csi-node-driver-mfhj5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif23f5b3db6a [] [] }} ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:21:59.982 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.034 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" HandleID="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.035 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" HandleID="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-125", "pod":"csi-node-driver-mfhj5", "timestamp":"2025-05-17 00:22:00.034770601 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.035 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.058 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.058 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.139 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.143 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.147 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.148 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.151 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.151 [INFO][4384] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.152 [INFO][4384] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317 May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.157 [INFO][4384] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.162 [INFO][4384] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.132/26] block=192.168.33.128/26 handle="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.162 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.132/26] handle="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" host="172-233-222-125" May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.162 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:00.189602 containerd[1455]: 2025-05-17 00:22:00.162 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.132/26] IPv6=[] ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" HandleID="k8s-pod-network.85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.190157 containerd[1455]: 2025-05-17 00:22:00.165 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-csi--node--driver--mfhj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e793e701-f5aa-4190-a1ec-13776ffa5239", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"csi-node-driver-mfhj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif23f5b3db6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:00.190157 containerd[1455]: 2025-05-17 00:22:00.165 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.132/32] ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.190157 containerd[1455]: 2025-05-17 00:22:00.166 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif23f5b3db6a ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.190157 containerd[1455]: 2025-05-17 00:22:00.174 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.190157 containerd[1455]: 2025-05-17 00:22:00.174 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" 
Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-csi--node--driver--mfhj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e793e701-f5aa-4190-a1ec-13776ffa5239", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317", Pod:"csi-node-driver-mfhj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif23f5b3db6a", MAC:"22:c6:9d:de:05:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:00.190157 containerd[1455]: 2025-05-17 00:22:00.185 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317" Namespace="calico-system" Pod="csi-node-driver-mfhj5" WorkloadEndpoint="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:00.208574 containerd[1455]: time="2025-05-17T00:22:00.204676806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:00.208574 containerd[1455]: time="2025-05-17T00:22:00.205448855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:00.208574 containerd[1455]: time="2025-05-17T00:22:00.205462275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.208574 containerd[1455]: time="2025-05-17T00:22:00.205528665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.220287 systemd[1]: Started cri-containerd-85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317.scope - libcontainer container 85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317. 
May 17 00:22:00.239652 containerd[1455]: time="2025-05-17T00:22:00.239605188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mfhj5,Uid:e793e701-f5aa-4190-a1ec-13776ffa5239,Namespace:calico-system,Attempt:1,} returns sandbox id \"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317\"" May 17 00:22:00.596434 systemd-networkd[1381]: cali104398033cc: Gained IPv6LL May 17 00:22:00.841665 containerd[1455]: time="2025-05-17T00:22:00.841618177Z" level=info msg="StopPodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\"" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.881 [INFO][4509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.882 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" iface="eth0" netns="/var/run/netns/cni-a998e869-1d53-4597-3318-1cb7c6d9d46b" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.882 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" iface="eth0" netns="/var/run/netns/cni-a998e869-1d53-4597-3318-1cb7c6d9d46b" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.883 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" iface="eth0" netns="/var/run/netns/cni-a998e869-1d53-4597-3318-1cb7c6d9d46b" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.883 [INFO][4509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.883 [INFO][4509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.905 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.905 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.905 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.911 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.911 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.913 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:00.918375 containerd[1455]: 2025-05-17 00:22:00.915 [INFO][4509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:00.919097 containerd[1455]: time="2025-05-17T00:22:00.918566589Z" level=info msg="TearDown network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" successfully" May 17 00:22:00.919097 containerd[1455]: time="2025-05-17T00:22:00.918594259Z" level=info msg="StopPodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" returns successfully" May 17 00:22:00.919660 containerd[1455]: time="2025-05-17T00:22:00.919320368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-9ctqn,Uid:708b28d1-b868-4f8b-b7c9-b5fa6b493a92,Namespace:calico-apiserver,Attempt:1,}" May 17 00:22:00.931932 systemd[1]: run-netns-cni\x2da998e869\x2d1d53\x2d4597\x2d3318\x2d1cb7c6d9d46b.mount: Deactivated successfully. 
May 17 00:22:00.991827 kubelet[2509]: E0517 00:22:00.991790 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:01.011664 systemd-networkd[1381]: calicb6bdaf8b50: Link UP May 17 00:22:01.014559 systemd-networkd[1381]: calicb6bdaf8b50: Gained carrier May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.957 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0 calico-apiserver-77f86bc66d- calico-apiserver 708b28d1-b868-4f8b-b7c9-b5fa6b493a92 999 0 2025-05-17 00:21:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77f86bc66d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-233-222-125 calico-apiserver-77f86bc66d-9ctqn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb6bdaf8b50 [] [] }} ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.963 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.981 [INFO][4536] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" HandleID="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.981 [INFO][4536] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" HandleID="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-233-222-125", "pod":"calico-apiserver-77f86bc66d-9ctqn", "timestamp":"2025-05-17 00:22:00.981350117 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.981 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.981 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
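The kubelet dns.go:153 event at the top of this stretch fires because the node's resolv.conf lists more nameservers than the limit kubelet applies (three, mirroring glibc's MAXNS), so only the first three are kept. A sketch of that truncation; the fourth server below is hypothetical, added only to trigger the same condition:

```go
// Sketch of the truncation behind the kubelet warning: at most three
// nameservers survive (kubelet's cap, matching glibc's MAXNS), and the
// "applied nameserver line" lists the survivors.
package main

import "fmt"

const maxNameservers = 3

func applied(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// The three survivors from the log plus one hypothetical extra that
	// would trip the "Nameserver limits exceeded" event.
	all := []string{"172.232.0.9", "172.232.0.19", "172.232.0.20", "10.0.0.53"}
	fmt.Println(applied(all)) // [172.232.0.9 172.232.0.19 172.232.0.20]
}
```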
May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.981 [INFO][4536] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.986 [INFO][4536] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.989 [INFO][4536] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.992 [INFO][4536] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.994 [INFO][4536] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.995 [INFO][4536] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.996 [INFO][4536] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.996 [INFO][4536] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9 May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:00.999 [INFO][4536] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:01.005 [INFO][4536] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.133/26] block=192.168.33.128/26 handle="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:01.005 [INFO][4536] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.133/26] handle="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" host="172-233-222-125" May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:01.005 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
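The ipam.go sequence above is the standard block walk: look up the host's affinities, try the affine block 192.168.33.128/26, load it, assign the next free ordinal, create a handle, and write the block back to claim the IP (here 192.168.33.133). A toy model of that walk using a fixed 64-slot /26; the pre-taken ordinals are a setup so the sketch lands on the same address as the log, not a claim about which pods hold them:

```go
// Toy model of the block walk (the real allocator works against the
// Calico datastore): scan the affine /26 for a free ordinal, record a
// handle, and hand back the address. Ordinals 0-4 are pre-marked taken
// purely so the sketch lands on .133 like the log does.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr    netip.Prefix
	handles [64]string // one slot per address in a /26; "" means free
}

func (b *block) assign(handleID string) (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := range b.handles {
		if b.handles[i] == "" {
			b.handles[i] = handleID // "Creating new handle" + claim
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted; IPAM would try another
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.33.128/26")}
	for i := 0; i < 5; i++ {
		b.handles[i] = "taken"
	}
	ip, _ := b.assign("k8s-pod-network.589ccdba...") // abbreviated handle
	fmt.Println(ip) // 192.168.33.133, the address claimed above
}
```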
May 17 00:22:01.026244 containerd[1455]: 2025-05-17 00:22:01.005 [INFO][4536] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.133/26] IPv6=[] ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" HandleID="k8s-pod-network.589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.026713 containerd[1455]: 2025-05-17 00:22:01.008 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"708b28d1-b868-4f8b-b7c9-b5fa6b493a92", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"calico-apiserver-77f86bc66d-9ctqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6bdaf8b50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:01.026713 containerd[1455]: 2025-05-17 00:22:01.008 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.133/32] ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.026713 containerd[1455]: 2025-05-17 00:22:01.008 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb6bdaf8b50 ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.026713 containerd[1455]: 2025-05-17 00:22:01.014 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.026713 containerd[1455]: 2025-05-17 00:22:01.015 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"708b28d1-b868-4f8b-b7c9-b5fa6b493a92", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9", Pod:"calico-apiserver-77f86bc66d-9ctqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6bdaf8b50", MAC:"36:fe:51:41:0e:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:01.026713 containerd[1455]: 2025-05-17 00:22:01.022 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9" Namespace="calico-apiserver" Pod="calico-apiserver-77f86bc66d-9ctqn" WorkloadEndpoint="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:01.043792 containerd[1455]: time="2025-05-17T00:22:01.043004207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:01.043792 containerd[1455]: time="2025-05-17T00:22:01.043304976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:01.043792 containerd[1455]: time="2025-05-17T00:22:01.043314056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:01.043792 containerd[1455]: time="2025-05-17T00:22:01.043578776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:01.067286 systemd[1]: Started cri-containerd-589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9.scope - libcontainer container 589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9. 
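The endpoint above is written back with MAC 36:fe:51:41:0e:a7, and the surrounding systemd-networkd messages report each cali* veth gaining an IPv6 link-local address ("Gained IPv6LL"). One common derivation is EUI-64 from an interface's own MAC; networkd can also use stable-privacy addresses, and each veth end derives from its own MAC, so treat this purely as an illustration using the workload-side MAC recorded in the log:

```go
// Classic EUI-64 derivation of an IPv6 link-local address from a MAC:
// flip the universal/local bit of the first octet, splice ff:fe into the
// middle, and prepend fe80::/64. Input is the workload MAC from the log.
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func linkLocalEUI64(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe // EUI-64 filler
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, _ := net.ParseMAC("36:fe:51:41:0e:a7")
	fmt.Println(linkLocalEUI64(mac)) // fe80::34fe:51ff:fe41:ea7
}
```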
May 17 00:22:01.101745 containerd[1455]: time="2025-05-17T00:22:01.101719257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f86bc66d-9ctqn,Uid:708b28d1-b868-4f8b-b7c9-b5fa6b493a92,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9\"" May 17 00:22:01.428451 systemd-networkd[1381]: calib7075275922: Gained IPv6LL May 17 00:22:01.556422 systemd-networkd[1381]: calif23f5b3db6a: Gained IPv6LL May 17 00:22:01.841742 containerd[1455]: time="2025-05-17T00:22:01.841395867Z" level=info msg="StopPodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\"" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.910 [INFO][4606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.910 [INFO][4606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" iface="eth0" netns="/var/run/netns/cni-b2817c39-e2cd-272a-7f66-d9fc81567e8b" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.910 [INFO][4606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" iface="eth0" netns="/var/run/netns/cni-b2817c39-e2cd-272a-7f66-d9fc81567e8b" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.910 [INFO][4606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" iface="eth0" netns="/var/run/netns/cni-b2817c39-e2cd-272a-7f66-d9fc81567e8b" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.910 [INFO][4606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.910 [INFO][4606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.938 [INFO][4613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.938 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.938 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.945 [WARNING][4613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.945 [INFO][4613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.947 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:01.951801 containerd[1455]: 2025-05-17 00:22:01.949 [INFO][4606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:01.954018 systemd[1]: run-netns-cni\x2db2817c39\x2de2cd\x2d272a\x2d7f66\x2dd9fc81567e8b.mount: Deactivated successfully. May 17 00:22:01.955357 containerd[1455]: time="2025-05-17T00:22:01.954477561Z" level=info msg="TearDown network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" successfully" May 17 00:22:01.955357 containerd[1455]: time="2025-05-17T00:22:01.954504411Z" level=info msg="StopPodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" returns successfully" May 17 00:22:01.956451 containerd[1455]: time="2025-05-17T00:22:01.956434250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7998fc854-4sfsk,Uid:66e07ccd-2dbe-42fe-bd10-e349fb811eb6,Namespace:calico-system,Attempt:1,}" May 17 00:22:02.003824 kubelet[2509]: E0517 00:22:02.003726 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:02.062679 systemd-networkd[1381]: cali71c6626e5e0: Link UP May 17 00:22:02.063977 systemd-networkd[1381]: cali71c6626e5e0: Gained carrier May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.007 [INFO][4619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0 calico-kube-controllers-7998fc854- calico-system 66e07ccd-2dbe-42fe-bd10-e349fb811eb6 1009 0 2025-05-17 00:21:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7998fc854 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-233-222-125 calico-kube-controllers-7998fc854-4sfsk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali71c6626e5e0 [] [] }} ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.008 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" 
WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.032 [INFO][4632] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" HandleID="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.032 [INFO][4632] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" HandleID="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235260), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-125", "pod":"calico-kube-controllers-7998fc854-4sfsk", "timestamp":"2025-05-17 00:22:02.032038872 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.032 [INFO][4632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.032 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.032 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.037 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.040 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.044 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.045 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.046 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.046 [INFO][4632] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.049 [INFO][4632] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.052 [INFO][4632] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.055 [INFO][4632] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.33.134/26] block=192.168.33.128/26 handle="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.055 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.134/26] handle="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" host="172-233-222-125" May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.055 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:02.080392 containerd[1455]: 2025-05-17 00:22:02.055 [INFO][4632] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.134/26] IPv6=[] ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" HandleID="k8s-pod-network.1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.080860 containerd[1455]: 2025-05-17 00:22:02.059 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0", GenerateName:"calico-kube-controllers-7998fc854-", Namespace:"calico-system", SelfLink:"", UID:"66e07ccd-2dbe-42fe-bd10-e349fb811eb6", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7998fc854", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"calico-kube-controllers-7998fc854-4sfsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71c6626e5e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:02.080860 containerd[1455]: 2025-05-17 00:22:02.059 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.134/32] ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.080860 containerd[1455]: 2025-05-17 00:22:02.059 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71c6626e5e0 ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" 
Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.080860 containerd[1455]: 2025-05-17 00:22:02.064 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.080860 containerd[1455]: 2025-05-17 00:22:02.064 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0", GenerateName:"calico-kube-controllers-7998fc854-", Namespace:"calico-system", SelfLink:"", UID:"66e07ccd-2dbe-42fe-bd10-e349fb811eb6", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7998fc854", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c", Pod:"calico-kube-controllers-7998fc854-4sfsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71c6626e5e0", MAC:"62:9b:73:1b:88:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:02.080860 containerd[1455]: 2025-05-17 00:22:02.074 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c" Namespace="calico-system" Pod="calico-kube-controllers-7998fc854-4sfsk" WorkloadEndpoint="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:02.116370 containerd[1455]: time="2025-05-17T00:22:02.115081770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:02.116370 containerd[1455]: time="2025-05-17T00:22:02.115121750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:02.116370 containerd[1455]: time="2025-05-17T00:22:02.115129660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:02.116370 containerd[1455]: time="2025-05-17T00:22:02.115254750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:02.147544 systemd[1]: Started cri-containerd-1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c.scope - libcontainer container 1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c. May 17 00:22:02.200200 containerd[1455]: time="2025-05-17T00:22:02.200150128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7998fc854-4sfsk,Uid:66e07ccd-2dbe-42fe-bd10-e349fb811eb6,Namespace:calico-system,Attempt:1,} returns sandbox id \"1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c\"" May 17 00:22:02.458422 containerd[1455]: time="2025-05-17T00:22:02.458351819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:02.459030 containerd[1455]: time="2025-05-17T00:22:02.458980418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:22:02.460524 containerd[1455]: time="2025-05-17T00:22:02.459468998Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:02.461315 containerd[1455]: time="2025-05-17T00:22:02.460998877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:02.461733 containerd[1455]: time="2025-05-17T00:22:02.461699157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.305946817s" May 17 00:22:02.461772 containerd[1455]: time="2025-05-17T00:22:02.461733157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:22:02.466049 containerd[1455]: time="2025-05-17T00:22:02.466017445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:22:02.467622 containerd[1455]: time="2025-05-17T00:22:02.467574724Z" level=info msg="CreateContainer within sandbox \"ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:02.474902 containerd[1455]: time="2025-05-17T00:22:02.474868030Z" level=info msg="CreateContainer within sandbox \"ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"02866946b1f31af6a3f957fd9038ddf62604a16dadc3cb03c4f85d41209fda3e\"" May 17 00:22:02.475535 containerd[1455]: time="2025-05-17T00:22:02.475317140Z" level=info msg="StartContainer for \"02866946b1f31af6a3f957fd9038ddf62604a16dadc3cb03c4f85d41209fda3e\"" May 17 00:22:02.503325 systemd[1]: Started 
cri-containerd-02866946b1f31af6a3f957fd9038ddf62604a16dadc3cb03c4f85d41209fda3e.scope - libcontainer container 02866946b1f31af6a3f957fd9038ddf62604a16dadc3cb03c4f85d41209fda3e. May 17 00:22:02.544126 containerd[1455]: time="2025-05-17T00:22:02.544087466Z" level=info msg="StartContainer for \"02866946b1f31af6a3f957fd9038ddf62604a16dadc3cb03c4f85d41209fda3e\" returns successfully" May 17 00:22:02.836534 systemd-networkd[1381]: calicb6bdaf8b50: Gained IPv6LL May 17 00:22:02.843193 containerd[1455]: time="2025-05-17T00:22:02.841752037Z" level=info msg="StopPodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\"" May 17 00:22:02.843744 containerd[1455]: time="2025-05-17T00:22:02.843718436Z" level=info msg="StopPodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\"" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.900 [INFO][4751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.900 [INFO][4751] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" iface="eth0" netns="/var/run/netns/cni-44be761b-a652-6e19-a2e0-91c731dc8104" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.901 [INFO][4751] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" iface="eth0" netns="/var/run/netns/cni-44be761b-a652-6e19-a2e0-91c731dc8104" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.901 [INFO][4751] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" iface="eth0" netns="/var/run/netns/cni-44be761b-a652-6e19-a2e0-91c731dc8104" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.901 [INFO][4751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.901 [INFO][4751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.971 [INFO][4765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.971 [INFO][4765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.971 [INFO][4765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.976 [WARNING][4765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.976 [INFO][4765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.978 [INFO][4765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:02.990614 containerd[1455]: 2025-05-17 00:22:02.985 [INFO][4751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:02.992808 containerd[1455]: time="2025-05-17T00:22:02.992734571Z" level=info msg="TearDown network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" successfully" May 17 00:22:02.992808 containerd[1455]: time="2025-05-17T00:22:02.992755911Z" level=info msg="StopPodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" returns successfully" May 17 00:22:02.999405 systemd[1]: run-netns-cni\x2d44be761b\x2da652\x2d6e19\x2da2e0\x2d91c731dc8104.mount: Deactivated successfully. May 17 00:22:03.015353 containerd[1455]: time="2025-05-17T00:22:03.015335010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-htsn7,Uid:4f18c687-4cb5-49f2-9647-374af2e4bff4,Namespace:calico-system,Attempt:1,}" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.928 [INFO][4750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.928 [INFO][4750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" iface="eth0" netns="/var/run/netns/cni-26ae002a-58f0-f51e-a9d1-fab2b6f3ea78" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.929 [INFO][4750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" iface="eth0" netns="/var/run/netns/cni-26ae002a-58f0-f51e-a9d1-fab2b6f3ea78" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.929 [INFO][4750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" iface="eth0" netns="/var/run/netns/cni-26ae002a-58f0-f51e-a9d1-fab2b6f3ea78" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.929 [INFO][4750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.929 [INFO][4750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.991 [INFO][4770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.991 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:02.991 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:03.003 [WARNING][4770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:03.003 [INFO][4770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:03.005 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:03.017060 containerd[1455]: 2025-05-17 00:22:03.012 [INFO][4750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:03.017355 containerd[1455]: time="2025-05-17T00:22:03.017223629Z" level=info msg="TearDown network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" successfully" May 17 00:22:03.017355 containerd[1455]: time="2025-05-17T00:22:03.017247689Z" level=info msg="StopPodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" returns successfully" May 17 00:22:03.020723 kubelet[2509]: E0517 00:22:03.017714 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:03.021089 containerd[1455]: time="2025-05-17T00:22:03.018403689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4khv,Uid:eab83464-1af3-4982-95f5-5c46d047a7e6,Namespace:kube-system,Attempt:1,}" May 17 00:22:03.022675 systemd[1]: run-netns-cni\x2d26ae002a\x2d58f0\x2df51e\x2da9d1\x2dfab2b6f3ea78.mount: Deactivated successfully. 
May 17 00:22:03.051667 kubelet[2509]: I0517 00:22:03.051518 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77f86bc66d-bzk7k" podStartSLOduration=21.741245338 podStartE2EDuration="24.051502912s" podCreationTimestamp="2025-05-17 00:21:39 +0000 UTC" firstStartedPulling="2025-05-17 00:22:00.155242231 +0000 UTC m=+37.428869273" lastFinishedPulling="2025-05-17 00:22:02.465499795 +0000 UTC m=+39.739126847" observedRunningTime="2025-05-17 00:22:03.051248742 +0000 UTC m=+40.324875784" watchObservedRunningTime="2025-05-17 00:22:03.051502912 +0000 UTC m=+40.325129954" May 17 00:22:03.092664 systemd-networkd[1381]: cali71c6626e5e0: Gained IPv6LL May 17 00:22:03.202257 systemd-networkd[1381]: cali3f4327cf7be: Link UP May 17 00:22:03.202449 systemd-networkd[1381]: cali3f4327cf7be: Gained carrier May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.104 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0 coredns-668d6bf9bc- kube-system eab83464-1af3-4982-95f5-5c46d047a7e6 1023 0 2025-05-17 00:21:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-222-125 coredns-668d6bf9bc-g4khv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f4327cf7be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.105 [INFO][4792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.150 [INFO][4808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" HandleID="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.150 [INFO][4808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" HandleID="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332460), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-222-125", "pod":"coredns-668d6bf9bc-g4khv", "timestamp":"2025-05-17 00:22:03.150599822 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.151 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.152 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.152 [INFO][4808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.166 [INFO][4808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.176 [INFO][4808] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.181 [INFO][4808] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.182 [INFO][4808] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.184 [INFO][4808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.184 [INFO][4808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.185 [INFO][4808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4 May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.189 [INFO][4808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.193 [INFO][4808] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.135/26] block=192.168.33.128/26 handle="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.193 [INFO][4808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.135/26] handle="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" host="172-233-222-125" May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.193 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
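The pod_startup_latency_tracker entry above reports two numbers for calico-apiserver-77f86bc66d-bzk7k: podStartE2EDuration is the wall time from pod creation to running, while podStartSLOduration excludes time spent pulling images. Re-running the arithmetic from the monotonic offsets (the m=+... values) reproduces the logged SLO figure exactly:

```go
// Reproducing the tracker's arithmetic from the monotonic (m=+...) values
// in the log: SLO duration = end-to-end duration minus image-pull time.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 24051502912 * time.Nanosecond                 // podStartE2EDuration=24.051502912s
	firstStartedPulling := 37428869273 * time.Nanosecond // m=+37.428869273
	lastFinishedPulling := 39739126847 * time.Nanosecond // m=+39.739126847

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Println(pull)       // 2.310257574s spent pulling images
	fmt.Println(e2e - pull) // 21.741245338s = podStartSLOduration in the log
}
```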
May 17 00:22:03.219485 containerd[1455]: 2025-05-17 00:22:03.193 [INFO][4808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.135/26] IPv6=[] ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" HandleID="k8s-pod-network.0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.220281 containerd[1455]: 2025-05-17 00:22:03.197 [INFO][4792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eab83464-1af3-4982-95f5-5c46d047a7e6", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"coredns-668d6bf9bc-g4khv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f4327cf7be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:03.220281 containerd[1455]: 2025-05-17 00:22:03.197 [INFO][4792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.135/32] ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.220281 containerd[1455]: 2025-05-17 00:22:03.197 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f4327cf7be ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.220281 containerd[1455]: 2025-05-17 00:22:03.203 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" 
WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.220281 containerd[1455]: 2025-05-17 00:22:03.204 [INFO][4792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eab83464-1af3-4982-95f5-5c46d047a7e6", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4", Pod:"coredns-668d6bf9bc-g4khv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f4327cf7be", MAC:"96:2d:01:35:d1:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:03.220281 containerd[1455]: 2025-05-17 00:22:03.215 [INFO][4792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4" Namespace="kube-system" Pod="coredns-668d6bf9bc-g4khv" WorkloadEndpoint="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:03.243195 containerd[1455]: time="2025-05-17T00:22:03.242143387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:03.243195 containerd[1455]: time="2025-05-17T00:22:03.242241917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:03.243195 containerd[1455]: time="2025-05-17T00:22:03.242255237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:03.243195 containerd[1455]: time="2025-05-17T00:22:03.242349277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:03.265396 systemd[1]: Started cri-containerd-0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4.scope - libcontainer container 0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4. May 17 00:22:03.311370 systemd-networkd[1381]: cali07c17a7dae9: Link UP May 17 00:22:03.314673 systemd-networkd[1381]: cali07c17a7dae9: Gained carrier May 17 00:22:03.338972 containerd[1455]: time="2025-05-17T00:22:03.338924388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g4khv,Uid:eab83464-1af3-4982-95f5-5c46d047a7e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4\"" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.113 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0 goldmane-78d55f7ddc- calico-system 4f18c687-4cb5-49f2-9647-374af2e4bff4 1022 0 2025-05-17 00:21:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-233-222-125 goldmane-78d55f7ddc-htsn7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali07c17a7dae9 [] [] }} ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.113 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.175 [INFO][4813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" HandleID="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.176 [INFO][4813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" HandleID="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002350d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-222-125", "pod":"goldmane-78d55f7ddc-htsn7", "timestamp":"2025-05-17 00:22:03.17514868 +0000 UTC"}, Hostname:"172-233-222-125", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.176 [INFO][4813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.193 [INFO][4813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.193 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-222-125' May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.267 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.273 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.282 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.33.128/26 host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.285 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.288 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.128/26 host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.288 [INFO][4813] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.33.128/26 handle="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.290 [INFO][4813] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.295 [INFO][4813] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.33.128/26 handle="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.299 [INFO][4813] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.33.136/26] block=192.168.33.128/26 handle="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.299 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.136/26] handle="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" host="172-233-222-125" May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.299 [INFO][4813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:22:03.342750 containerd[1455]: 2025-05-17 00:22:03.299 [INFO][4813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.136/26] IPv6=[] ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" HandleID="k8s-pod-network.26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.343206 containerd[1455]: 2025-05-17 00:22:03.303 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4f18c687-4cb5-49f2-9647-374af2e4bff4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"", Pod:"goldmane-78d55f7ddc-htsn7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07c17a7dae9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:03.343206 containerd[1455]: 2025-05-17 00:22:03.304 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.136/32] ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.343206 containerd[1455]: 2025-05-17 00:22:03.304 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07c17a7dae9 ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.343206 containerd[1455]: 2025-05-17 00:22:03.314 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.343206 containerd[1455]: 2025-05-17 00:22:03.318 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" 
WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4f18c687-4cb5-49f2-9647-374af2e4bff4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea", Pod:"goldmane-78d55f7ddc-htsn7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07c17a7dae9", MAC:"2e:27:77:45:d5:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:03.343206 containerd[1455]: 2025-05-17 00:22:03.336 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea" Namespace="calico-system" Pod="goldmane-78d55f7ddc-htsn7" WorkloadEndpoint="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:03.352271 kubelet[2509]: E0517 00:22:03.352057 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:03.361412 containerd[1455]: time="2025-05-17T00:22:03.361359967Z" level=info msg="CreateContainer within sandbox \"0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:22:03.379969 containerd[1455]: time="2025-05-17T00:22:03.375397240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:03.379969 containerd[1455]: time="2025-05-17T00:22:03.375446930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:03.379969 containerd[1455]: time="2025-05-17T00:22:03.375459590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:03.379969 containerd[1455]: time="2025-05-17T00:22:03.375530100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:03.386341 containerd[1455]: time="2025-05-17T00:22:03.386315715Z" level=info msg="CreateContainer within sandbox \"0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e8d54dc58ab81905cfed8ca0529dbc671056b2b870ede6152f53242c7c9a782\"" May 17 00:22:03.389428 containerd[1455]: time="2025-05-17T00:22:03.389397403Z" level=info msg="StartContainer for \"3e8d54dc58ab81905cfed8ca0529dbc671056b2b870ede6152f53242c7c9a782\"" May 17 00:22:03.409336 systemd[1]: Started cri-containerd-26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea.scope - libcontainer container 26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea. May 17 00:22:03.472700 systemd[1]: Started cri-containerd-3e8d54dc58ab81905cfed8ca0529dbc671056b2b870ede6152f53242c7c9a782.scope - libcontainer container 3e8d54dc58ab81905cfed8ca0529dbc671056b2b870ede6152f53242c7c9a782. May 17 00:22:03.513132 containerd[1455]: time="2025-05-17T00:22:03.513102771Z" level=info msg="StartContainer for \"3e8d54dc58ab81905cfed8ca0529dbc671056b2b870ede6152f53242c7c9a782\" returns successfully" May 17 00:22:03.610941 containerd[1455]: time="2025-05-17T00:22:03.610836652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-htsn7,Uid:4f18c687-4cb5-49f2-9647-374af2e4bff4,Namespace:calico-system,Attempt:1,} returns sandbox id \"26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea\"" May 17 00:22:03.656960 containerd[1455]: time="2025-05-17T00:22:03.656220900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.656960 containerd[1455]: time="2025-05-17T00:22:03.656918149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:22:03.657206 containerd[1455]: time="2025-05-17T00:22:03.657162049Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.658733 containerd[1455]: time="2025-05-17T00:22:03.658708898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.659370 containerd[1455]: time="2025-05-17T00:22:03.659335388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.193284973s" May 17 00:22:03.659421 containerd[1455]: time="2025-05-17T00:22:03.659368358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:22:03.660588 containerd[1455]: time="2025-05-17T00:22:03.660560167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:22:03.671662 containerd[1455]: time="2025-05-17T00:22:03.671624512Z" level=info msg="CreateContainer within sandbox \"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:22:03.685006 containerd[1455]: time="2025-05-17T00:22:03.684964665Z" level=info msg="CreateContainer within sandbox \"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ab88779326fecb1d0c5f5d7130a9c667223e11672d884b46cb56ae4f8753118\"" May 17 00:22:03.685690 containerd[1455]: time="2025-05-17T00:22:03.685661425Z" level=info msg="StartContainer for \"3ab88779326fecb1d0c5f5d7130a9c667223e11672d884b46cb56ae4f8753118\"" May 17 00:22:03.720296 systemd[1]: Started cri-containerd-3ab88779326fecb1d0c5f5d7130a9c667223e11672d884b46cb56ae4f8753118.scope - libcontainer container 3ab88779326fecb1d0c5f5d7130a9c667223e11672d884b46cb56ae4f8753118. May 17 00:22:03.755629 containerd[1455]: time="2025-05-17T00:22:03.755340410Z" level=info msg="StartContainer for \"3ab88779326fecb1d0c5f5d7130a9c667223e11672d884b46cb56ae4f8753118\" returns successfully" May 17 00:22:03.821126 containerd[1455]: time="2025-05-17T00:22:03.821085347Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:03.821627 containerd[1455]: time="2025-05-17T00:22:03.821582537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:22:03.823452 containerd[1455]: time="2025-05-17T00:22:03.823426406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 162.835649ms" May 17 00:22:03.823499 containerd[1455]: time="2025-05-17T00:22:03.823454176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:22:03.824539 containerd[1455]: time="2025-05-17T00:22:03.824092606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:22:03.826669 containerd[1455]: time="2025-05-17T00:22:03.826644844Z" level=info msg="CreateContainer within sandbox \"589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:22:03.841623 containerd[1455]: time="2025-05-17T00:22:03.841598087Z" level=info msg="CreateContainer within sandbox \"589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2e11216a6f8d8e2e0acc07dff809f871129815195aa53892c44dbbe62b52721\"" May 17 00:22:03.842604 containerd[1455]: time="2025-05-17T00:22:03.842581196Z" level=info msg="StartContainer for \"f2e11216a6f8d8e2e0acc07dff809f871129815195aa53892c44dbbe62b52721\"" May 17 00:22:03.866299 systemd[1]: Started cri-containerd-f2e11216a6f8d8e2e0acc07dff809f871129815195aa53892c44dbbe62b52721.scope - libcontainer container f2e11216a6f8d8e2e0acc07dff809f871129815195aa53892c44dbbe62b52721. 
May 17 00:22:03.901636 containerd[1455]: time="2025-05-17T00:22:03.901558497Z" level=info msg="StartContainer for \"f2e11216a6f8d8e2e0acc07dff809f871129815195aa53892c44dbbe62b52721\" returns successfully" May 17 00:22:04.039501 kubelet[2509]: E0517 00:22:04.039462 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:04.040893 kubelet[2509]: I0517 00:22:04.040868 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:04.073541 kubelet[2509]: I0517 00:22:04.073497 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g4khv" podStartSLOduration=35.073486051 podStartE2EDuration="35.073486051s" podCreationTimestamp="2025-05-17 00:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:04.061800047 +0000 UTC m=+41.335427089" watchObservedRunningTime="2025-05-17 00:22:04.073486051 +0000 UTC m=+41.347113093" May 17 00:22:04.073640 kubelet[2509]: I0517 00:22:04.073585 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77f86bc66d-9ctqn" podStartSLOduration=22.352788161 podStartE2EDuration="25.073580031s" podCreationTimestamp="2025-05-17 00:21:39 +0000 UTC" firstStartedPulling="2025-05-17 00:22:01.103104346 +0000 UTC m=+38.376731378" lastFinishedPulling="2025-05-17 00:22:03.823896196 +0000 UTC m=+41.097523248" observedRunningTime="2025-05-17 00:22:04.072336422 +0000 UTC m=+41.345963464" watchObservedRunningTime="2025-05-17 00:22:04.073580031 +0000 UTC m=+41.347207073" May 17 00:22:04.246460 systemd-networkd[1381]: cali3f4327cf7be: Gained IPv6LL May 17 00:22:04.628357 systemd-networkd[1381]: cali07c17a7dae9: Gained IPv6LL May 17 00:22:05.044147 kubelet[2509]: E0517 00:22:05.043712 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:05.273703 kubelet[2509]: I0517 00:22:05.273581 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:22:05.386291 containerd[1455]: time="2025-05-17T00:22:05.386203344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:05.387248 containerd[1455]: time="2025-05-17T00:22:05.387166904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:22:05.387628 containerd[1455]: time="2025-05-17T00:22:05.387609054Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:05.389758 containerd[1455]: time="2025-05-17T00:22:05.389735153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:05.390328 containerd[1455]: time="2025-05-17T00:22:05.390307862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id 
\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 1.566189856s" May 17 00:22:05.390365 containerd[1455]: time="2025-05-17T00:22:05.390332902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:22:05.392322 containerd[1455]: time="2025-05-17T00:22:05.392068561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:05.404336 containerd[1455]: time="2025-05-17T00:22:05.402838876Z" level=info msg="CreateContainer within sandbox \"1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:22:05.414363 containerd[1455]: time="2025-05-17T00:22:05.414338850Z" level=info msg="CreateContainer within sandbox \"1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d225b1535238f4d50a8a1e7ffe691ed7a41dd6bdeae1fd61de63bd19df1c7c36\"" May 17 00:22:05.416202 containerd[1455]: time="2025-05-17T00:22:05.415415180Z" level=info msg="StartContainer for \"d225b1535238f4d50a8a1e7ffe691ed7a41dd6bdeae1fd61de63bd19df1c7c36\"" May 17 00:22:05.462300 systemd[1]: Started cri-containerd-d225b1535238f4d50a8a1e7ffe691ed7a41dd6bdeae1fd61de63bd19df1c7c36.scope - libcontainer container d225b1535238f4d50a8a1e7ffe691ed7a41dd6bdeae1fd61de63bd19df1c7c36. May 17 00:22:05.501723 containerd[1455]: time="2025-05-17T00:22:05.501671297Z" level=info msg="StartContainer for \"d225b1535238f4d50a8a1e7ffe691ed7a41dd6bdeae1fd61de63bd19df1c7c36\" returns successfully" May 17 00:22:05.509506 containerd[1455]: time="2025-05-17T00:22:05.509481373Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:05.510228 containerd[1455]: time="2025-05-17T00:22:05.510207922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:05.510386 containerd[1455]: time="2025-05-17T00:22:05.510286422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:05.510549 kubelet[2509]: E0517 00:22:05.510504 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:05.510549 
kubelet[2509]: E0517 00:22:05.510542 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:05.511016 containerd[1455]: time="2025-05-17T00:22:05.510949702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:22:05.512762 kubelet[2509]: E0517 00:22:05.512715 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-htsn7_calico-system(4f18c687-4cb5-49f2-9647-374af2e4bff4): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:05.514057 kubelet[2509]: E0517 00:22:05.514019 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:22:06.047368 kubelet[2509]: E0517 00:22:06.047332 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:22:06.067320 kubelet[2509]: I0517 00:22:06.067137 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7998fc854-4sfsk" podStartSLOduration=20.878630925 podStartE2EDuration="24.06711036s" podCreationTimestamp="2025-05-17 00:21:42 +0000 UTC" firstStartedPulling="2025-05-17 00:22:02.202537827 +0000 UTC m=+39.476164859" lastFinishedPulling="2025-05-17 00:22:05.391017252 +0000 UTC m=+42.664644294" observedRunningTime="2025-05-17 00:22:06.06611493 +0000 UTC m=+43.339741972" watchObservedRunningTime="2025-05-17 00:22:06.06711036 +0000 UTC m=+43.340737402" May 17 00:22:06.573044 containerd[1455]: time="2025-05-17T00:22:06.572995727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:06.573840 containerd[1455]: time="2025-05-17T00:22:06.573669516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:22:06.574607 containerd[1455]: time="2025-05-17T00:22:06.574570957Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:06.577091 containerd[1455]: time="2025-05-17T00:22:06.577074159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:22:06.577463 containerd[1455]: time="2025-05-17T00:22:06.577439398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id 
\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.066417186s" May 17 00:22:06.577498 containerd[1455]: time="2025-05-17T00:22:06.577464017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:22:06.579229 containerd[1455]: time="2025-05-17T00:22:06.579131924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:06.579738 containerd[1455]: time="2025-05-17T00:22:06.579719136Z" level=info msg="CreateContainer within sandbox \"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:22:06.604111 containerd[1455]: time="2025-05-17T00:22:06.604091501Z" level=info msg="CreateContainer within sandbox \"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b05d9c26fe044a29a6cc199553fe446eea11f2dd5aa99a86e79e4deaa49b85d\"" May 17 00:22:06.604917 containerd[1455]: time="2025-05-17T00:22:06.604881546Z" level=info msg="StartContainer for \"7b05d9c26fe044a29a6cc199553fe446eea11f2dd5aa99a86e79e4deaa49b85d\"" May 17 00:22:06.630299 systemd[1]: Started cri-containerd-7b05d9c26fe044a29a6cc199553fe446eea11f2dd5aa99a86e79e4deaa49b85d.scope - libcontainer container 7b05d9c26fe044a29a6cc199553fe446eea11f2dd5aa99a86e79e4deaa49b85d. May 17 00:22:06.656882 containerd[1455]: time="2025-05-17T00:22:06.656850026Z" level=info msg="StartContainer for \"7b05d9c26fe044a29a6cc199553fe446eea11f2dd5aa99a86e79e4deaa49b85d\" returns successfully" May 17 00:22:06.692654 containerd[1455]: time="2025-05-17T00:22:06.692617223Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:06.693569 containerd[1455]: time="2025-05-17T00:22:06.693549945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:06.693628 containerd[1455]: time="2025-05-17T00:22:06.693605853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:06.694565 kubelet[2509]: E0517 00:22:06.693740 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 
00:22:06.694565 kubelet[2509]: E0517 00:22:06.693789 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:06.694565 kubelet[2509]: E0517 00:22:06.693894 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:839122e2a12b4271ae6fd9949780c33e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:06.696099 containerd[1455]: time="2025-05-17T00:22:06.696080815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:06.799458 containerd[1455]: time="2025-05-17T00:22:06.799431402Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:06.800431 containerd[1455]: time="2025-05-17T00:22:06.800406281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:06.800571 containerd[1455]: time="2025-05-17T00:22:06.800456050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:06.800907 kubelet[2509]: E0517 00:22:06.800597 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:06.800907 kubelet[2509]: E0517 00:22:06.800621 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:06.800907 kubelet[2509]: E0517 00:22:06.800694 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:06.801954 kubelet[2509]: E0517 00:22:06.801898 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:22:06.902864 kubelet[2509]: I0517 00:22:06.902744 2509 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:22:06.902864 kubelet[2509]: I0517 00:22:06.902773 2509 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:22:07.064534 kubelet[2509]: I0517 00:22:07.064462 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mfhj5" podStartSLOduration=18.726773651 podStartE2EDuration="25.064446599s" podCreationTimestamp="2025-05-17 00:21:42 +0000 UTC" firstStartedPulling="2025-05-17 00:22:00.240745278 +0000 UTC m=+37.514372320" lastFinishedPulling="2025-05-17 00:22:06.578418236 +0000 UTC m=+43.852045268" observedRunningTime="2025-05-17 00:22:07.063923124 +0000 UTC m=+44.337550166" watchObservedRunningTime="2025-05-17 00:22:07.064446599 +0000 UTC m=+44.338073641" May 17 00:22:19.841985 kubelet[2509]: E0517 00:22:19.841868 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:22:20.841587 containerd[1455]: time="2025-05-17T00:22:20.841361367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:20.938326 containerd[1455]: time="2025-05-17T00:22:20.938144248Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:20.941212 containerd[1455]: time="2025-05-17T00:22:20.939363322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:20.941212 containerd[1455]: time="2025-05-17T00:22:20.939432422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:20.941379 kubelet[2509]: E0517 00:22:20.940650 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:20.941379 kubelet[2509]: E0517 00:22:20.940682 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:20.941379 kubelet[2509]: E0517 00:22:20.940811 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-htsn7_calico-system(4f18c687-4cb5-49f2-9647-374af2e4bff4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:20.942303 kubelet[2509]: E0517 00:22:20.942272 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:22:21.991068 systemd[1]: run-containerd-runc-k8s.io-57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961-runc.dTst5E.mount: Deactivated successfully. May 17 00:22:22.822399 containerd[1455]: time="2025-05-17T00:22:22.822353401Z" level=info msg="StopPodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\"" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.883 [WARNING][5221] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"708b28d1-b868-4f8b-b7c9-b5fa6b493a92", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9", Pod:"calico-apiserver-77f86bc66d-9ctqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6bdaf8b50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.883 [INFO][5221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.883 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" iface="eth0" netns="" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.883 [INFO][5221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.883 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.917 [INFO][5230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.918 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.918 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.923 [WARNING][5230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.923 [INFO][5230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.924 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:22.930168 containerd[1455]: 2025-05-17 00:22:22.927 [INFO][5221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:22.930737 containerd[1455]: time="2025-05-17T00:22:22.930224771Z" level=info msg="TearDown network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" successfully" May 17 00:22:22.930737 containerd[1455]: time="2025-05-17T00:22:22.930247121Z" level=info msg="StopPodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" returns successfully" May 17 00:22:22.930938 containerd[1455]: time="2025-05-17T00:22:22.930907703Z" level=info msg="RemovePodSandbox for \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\"" May 17 00:22:22.930973 containerd[1455]: time="2025-05-17T00:22:22.930938723Z" level=info msg="Forcibly stopping sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\"" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.967 [WARNING][5244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"708b28d1-b868-4f8b-b7c9-b5fa6b493a92", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"589ccdba7d3f1fb0a5df1bf347024b9737eab1218a64d690193e1d448ef1c6a9", Pod:"calico-apiserver-77f86bc66d-9ctqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6bdaf8b50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.967 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.967 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" iface="eth0" netns="" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.967 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.968 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.985 [INFO][5251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.986 [INFO][5251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.986 [INFO][5251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.992 [WARNING][5251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.992 [INFO][5251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" HandleID="k8s-pod-network.cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--9ctqn-eth0" May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:22.994 [INFO][5251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.002155 containerd[1455]: 2025-05-17 00:22:23.000 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502" May 17 00:22:23.002590 containerd[1455]: time="2025-05-17T00:22:23.002213144Z" level=info msg="TearDown network for sandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" successfully" May 17 00:22:23.007195 containerd[1455]: time="2025-05-17T00:22:23.006233331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.007195 containerd[1455]: time="2025-05-17T00:22:23.006321370Z" level=info msg="RemovePodSandbox \"cb1f87896fb527f3a336b5c516ee72df0b6f5b648646325999e6c2926ceb0502\" returns successfully" May 17 00:22:23.007195 containerd[1455]: time="2025-05-17T00:22:23.006833275Z" level=info msg="StopPodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\"" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.039 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecdb2099-9206-4f56-bd2f-4d5b7338559a", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec", Pod:"calico-apiserver-77f86bc66d-bzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7075275922", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.039 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.039 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" iface="eth0" netns="" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.039 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.039 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.074 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.075 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.075 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.083 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.083 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.084 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.094025 containerd[1455]: 2025-05-17 00:22:23.091 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.094025 containerd[1455]: time="2025-05-17T00:22:23.093376290Z" level=info msg="TearDown network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" successfully" May 17 00:22:23.094025 containerd[1455]: time="2025-05-17T00:22:23.093404059Z" level=info msg="StopPodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" returns successfully" May 17 00:22:23.094025 containerd[1455]: time="2025-05-17T00:22:23.093679136Z" level=info msg="RemovePodSandbox for \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\"" May 17 00:22:23.094025 containerd[1455]: time="2025-05-17T00:22:23.093694255Z" level=info msg="Forcibly stopping sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\"" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.121 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0", GenerateName:"calico-apiserver-77f86bc66d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecdb2099-9206-4f56-bd2f-4d5b7338559a", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f86bc66d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"ce3196d2bd97fb2be4fb22cbc585bf5f12f52e7809b24155890c71fa6c41dcec", Pod:"calico-apiserver-77f86bc66d-bzk7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib7075275922", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.121 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.121 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" iface="eth0" netns="" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.121 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.121 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.141 [INFO][5294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.141 [INFO][5294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.141 [INFO][5294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.147 [WARNING][5294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.147 [INFO][5294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" HandleID="k8s-pod-network.6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" Workload="172--233--222--125-k8s-calico--apiserver--77f86bc66d--bzk7k-eth0" May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.150 [INFO][5294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.154739 containerd[1455]: 2025-05-17 00:22:23.152 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8" May 17 00:22:23.155097 containerd[1455]: time="2025-05-17T00:22:23.154781326Z" level=info msg="TearDown network for sandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" successfully" May 17 00:22:23.158715 containerd[1455]: time="2025-05-17T00:22:23.158685093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.158770 containerd[1455]: time="2025-05-17T00:22:23.158733213Z" level=info msg="RemovePodSandbox \"6c13fd7e5fb2297473e1620db3dfeb3bef7cf43a5037cedc31d63b1c9410ccc8\" returns successfully" May 17 00:22:23.159165 containerd[1455]: time="2025-05-17T00:22:23.159140519Z" level=info msg="StopPodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\"" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.192 [WARNING][5309] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eab83464-1af3-4982-95f5-5c46d047a7e6", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4", Pod:"coredns-668d6bf9bc-g4khv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f4327cf7be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.192 [INFO][5309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.192 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" iface="eth0" netns="" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.192 [INFO][5309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.192 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.211 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.211 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.211 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.215 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.216 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.217 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.220713 containerd[1455]: 2025-05-17 00:22:23.218 [INFO][5309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.221032 containerd[1455]: time="2025-05-17T00:22:23.220753783Z" level=info msg="TearDown network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" successfully" May 17 00:22:23.221032 containerd[1455]: time="2025-05-17T00:22:23.220778823Z" level=info msg="StopPodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" returns successfully" May 17 00:22:23.222200 containerd[1455]: time="2025-05-17T00:22:23.221402806Z" level=info msg="RemovePodSandbox for \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\"" May 17 00:22:23.222200 containerd[1455]: time="2025-05-17T00:22:23.221447565Z" level=info msg="Forcibly stopping sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\"" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.256 [WARNING][5331] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eab83464-1af3-4982-95f5-5c46d047a7e6", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"0646bc98e70017e4df37b6cc37ddc319c782550060021023cb0a58dee5ef6df4", Pod:"coredns-668d6bf9bc-g4khv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f4327cf7be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.258 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.258 [INFO][5331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" iface="eth0" netns="" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.258 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.258 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.286 [INFO][5339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.287 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.287 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.301 [WARNING][5339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.301 [INFO][5339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" HandleID="k8s-pod-network.a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--g4khv-eth0" May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.302 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.307139 containerd[1455]: 2025-05-17 00:22:23.304 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4" May 17 00:22:23.308245 containerd[1455]: time="2025-05-17T00:22:23.307552354Z" level=info msg="TearDown network for sandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" successfully" May 17 00:22:23.310611 containerd[1455]: time="2025-05-17T00:22:23.310582552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.310711 containerd[1455]: time="2025-05-17T00:22:23.310697481Z" level=info msg="RemovePodSandbox \"a760e089bcfd3cc3be1453bb54dc5167dded56f1caab96af0877898a794199d4\" returns successfully" May 17 00:22:23.311401 containerd[1455]: time="2025-05-17T00:22:23.311364314Z" level=info msg="StopPodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\"" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.384 [WARNING][5354] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0", GenerateName:"calico-kube-controllers-7998fc854-", Namespace:"calico-system", SelfLink:"", UID:"66e07ccd-2dbe-42fe-bd10-e349fb811eb6", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7998fc854", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c", Pod:"calico-kube-controllers-7998fc854-4sfsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71c6626e5e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.385 [INFO][5354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.385 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" iface="eth0" netns="" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.385 [INFO][5354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.385 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.403 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.403 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.403 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.408 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.408 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.409 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.414213 containerd[1455]: 2025-05-17 00:22:23.412 [INFO][5354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.414213 containerd[1455]: time="2025-05-17T00:22:23.414089883Z" level=info msg="TearDown network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" successfully" May 17 00:22:23.414213 containerd[1455]: time="2025-05-17T00:22:23.414119613Z" level=info msg="StopPodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" returns successfully" May 17 00:22:23.415528 containerd[1455]: time="2025-05-17T00:22:23.415251691Z" level=info msg="RemovePodSandbox for \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\"" May 17 00:22:23.415528 containerd[1455]: time="2025-05-17T00:22:23.415293081Z" level=info msg="Forcibly stopping sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\"" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.444 [WARNING][5375] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0", GenerateName:"calico-kube-controllers-7998fc854-", Namespace:"calico-system", SelfLink:"", UID:"66e07ccd-2dbe-42fe-bd10-e349fb811eb6", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7998fc854", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"1430ef68df624e624de6fbf1f06f1d1de0f620a2b4f1da2ed731065d48dfc44c", Pod:"calico-kube-controllers-7998fc854-4sfsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71c6626e5e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.444 [INFO][5375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.444 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" iface="eth0" netns="" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.444 [INFO][5375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.444 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.466 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.467 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.467 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.471 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.471 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" HandleID="k8s-pod-network.2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" Workload="172--233--222--125-k8s-calico--kube--controllers--7998fc854--4sfsk-eth0" May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.472 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.477282 containerd[1455]: 2025-05-17 00:22:23.475 [INFO][5375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3" May 17 00:22:23.477282 containerd[1455]: time="2025-05-17T00:22:23.477145722Z" level=info msg="TearDown network for sandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" successfully" May 17 00:22:23.480934 containerd[1455]: time="2025-05-17T00:22:23.480651674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.480934 containerd[1455]: time="2025-05-17T00:22:23.480764713Z" level=info msg="RemovePodSandbox \"2c2ca6e85e1abd93ee4f2569bfef730e2f7d7398f9f8781e6e099ef04121a3e3\" returns successfully" May 17 00:22:23.481515 containerd[1455]: time="2025-05-17T00:22:23.481492794Z" level=info msg="StopPodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\"" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.512 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-csi--node--driver--mfhj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e793e701-f5aa-4190-a1ec-13776ffa5239", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317", Pod:"csi-node-driver-mfhj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif23f5b3db6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.512 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.512 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" iface="eth0" netns="" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.512 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.512 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.536 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.537 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.537 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.541 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.542 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.543 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.547447 containerd[1455]: 2025-05-17 00:22:23.544 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.547447 containerd[1455]: time="2025-05-17T00:22:23.547027037Z" level=info msg="TearDown network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" successfully" May 17 00:22:23.547447 containerd[1455]: time="2025-05-17T00:22:23.547048717Z" level=info msg="StopPodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" returns successfully" May 17 00:22:23.548766 containerd[1455]: time="2025-05-17T00:22:23.548661199Z" level=info msg="RemovePodSandbox for \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\"" May 17 00:22:23.548766 containerd[1455]: time="2025-05-17T00:22:23.548686839Z" level=info msg="Forcibly stopping sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\"" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.579 [WARNING][5418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-csi--node--driver--mfhj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e793e701-f5aa-4190-a1ec-13776ffa5239", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"85120c538511467f4dac0433780ced366a18dce0f731c9beaac436a0a907d317", Pod:"csi-node-driver-mfhj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif23f5b3db6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.579 [INFO][5418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.579 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" iface="eth0" netns="" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.579 [INFO][5418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.579 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.598 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.599 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.599 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.603 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.603 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" HandleID="k8s-pod-network.438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" Workload="172--233--222--125-k8s-csi--node--driver--mfhj5-eth0" May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.605 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.610123 containerd[1455]: 2025-05-17 00:22:23.608 [INFO][5418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2" May 17 00:22:23.610123 containerd[1455]: time="2025-05-17T00:22:23.610070185Z" level=info msg="TearDown network for sandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" successfully" May 17 00:22:23.614229 containerd[1455]: time="2025-05-17T00:22:23.614169851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.614364 containerd[1455]: time="2025-05-17T00:22:23.614327359Z" level=info msg="RemovePodSandbox \"438d4c8493a774e4d848652d465d75558b3188b0850d04cba23c141205404eb2\" returns successfully" May 17 00:22:23.615375 containerd[1455]: time="2025-05-17T00:22:23.615227310Z" level=info msg="StopPodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\"" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.643 [WARNING][5440] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" WorkloadEndpoint="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.644 [INFO][5440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.644 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" iface="eth0" netns="" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.644 [INFO][5440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.644 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.671 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.671 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.671 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.678 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.678 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.679 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.684821 containerd[1455]: 2025-05-17 00:22:23.682 [INFO][5440] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.684821 containerd[1455]: time="2025-05-17T00:22:23.684631560Z" level=info msg="TearDown network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" successfully" May 17 00:22:23.684821 containerd[1455]: time="2025-05-17T00:22:23.684692199Z" level=info msg="StopPodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" returns successfully" May 17 00:22:23.686355 containerd[1455]: time="2025-05-17T00:22:23.686110983Z" level=info msg="RemovePodSandbox for \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\"" May 17 00:22:23.686355 containerd[1455]: time="2025-05-17T00:22:23.686134313Z" level=info msg="Forcibly stopping sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\"" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.717 [WARNING][5463] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" WorkloadEndpoint="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.717 [INFO][5463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.717 [INFO][5463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" iface="eth0" netns="" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.717 [INFO][5463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.717 [INFO][5463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.732 [INFO][5470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.732 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.732 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.736 [WARNING][5470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.736 [INFO][5470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" HandleID="k8s-pod-network.985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" Workload="172--233--222--125-k8s-whisker--77988f4665--6r7kc-eth0" May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.737 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.741216 containerd[1455]: 2025-05-17 00:22:23.739 [INFO][5463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46" May 17 00:22:23.741474 containerd[1455]: time="2025-05-17T00:22:23.741277768Z" level=info msg="TearDown network for sandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" successfully" May 17 00:22:23.745198 containerd[1455]: time="2025-05-17T00:22:23.744760110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.745198 containerd[1455]: time="2025-05-17T00:22:23.744829079Z" level=info msg="RemovePodSandbox \"985a8b7f11abc56e1486e767d5ea02a5a4ebbada4ee1141822d6507b5b3cbd46\" returns successfully" May 17 00:22:23.745597 containerd[1455]: time="2025-05-17T00:22:23.745561001Z" level=info msg="StopPodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\"" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.777 [WARNING][5484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283", Pod:"coredns-668d6bf9bc-q7w6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali104398033cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.777 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.777 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" iface="eth0" netns="" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.777 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.777 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.798 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.798 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.798 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.802 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.802 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.804 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.809012 containerd[1455]: 2025-05-17 00:22:23.807 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.809395 containerd[1455]: time="2025-05-17T00:22:23.809052465Z" level=info msg="TearDown network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" successfully" May 17 00:22:23.809395 containerd[1455]: time="2025-05-17T00:22:23.809073535Z" level=info msg="StopPodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" returns successfully" May 17 00:22:23.809774 containerd[1455]: time="2025-05-17T00:22:23.809747697Z" level=info msg="RemovePodSandbox for \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\"" May 17 00:22:23.809805 containerd[1455]: time="2025-05-17T00:22:23.809774987Z" level=info msg="Forcibly stopping sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\"" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.839 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ddd81ac-9fd2-4e37-83a9-b3a3b4011761", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"759921b1f96ad776a013a7d07e55433ad459030f1652f84aa581756221eb5283", Pod:"coredns-668d6bf9bc-q7w6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali104398033cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.839 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.839 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" iface="eth0" netns="" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.839 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.839 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.857 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.857 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.857 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.861 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.861 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" HandleID="k8s-pod-network.d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" Workload="172--233--222--125-k8s-coredns--668d6bf9bc--q7w6q-eth0" May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.862 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.865550 containerd[1455]: 2025-05-17 00:22:23.863 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058" May 17 00:22:23.866949 containerd[1455]: time="2025-05-17T00:22:23.866139118Z" level=info msg="TearDown network for sandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" successfully" May 17 00:22:23.869261 containerd[1455]: time="2025-05-17T00:22:23.869213105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:23.869261 containerd[1455]: time="2025-05-17T00:22:23.869256495Z" level=info msg="RemovePodSandbox \"d4eb94116e6aabfc8ef1cc385e8c19819d94b97ff934f0afc799bbd8414a8058\" returns successfully" May 17 00:22:23.869718 containerd[1455]: time="2025-05-17T00:22:23.869699439Z" level=info msg="StopPodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\"" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.902 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4f18c687-4cb5-49f2-9647-374af2e4bff4", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea", Pod:"goldmane-78d55f7ddc-htsn7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07c17a7dae9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.902 [INFO][5527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.902 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" iface="eth0" netns="" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.902 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.902 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.935 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.936 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.936 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.946 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.946 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.948 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:23.953169 containerd[1455]: 2025-05-17 00:22:23.950 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:23.953169 containerd[1455]: time="2025-05-17T00:22:23.953153668Z" level=info msg="TearDown network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" successfully" May 17 00:22:23.956480 containerd[1455]: time="2025-05-17T00:22:23.953222857Z" level=info msg="StopPodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" returns successfully" May 17 00:22:23.956480 containerd[1455]: time="2025-05-17T00:22:23.953760581Z" level=info msg="RemovePodSandbox for \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\"" May 17 00:22:23.956480 containerd[1455]: time="2025-05-17T00:22:23.953790640Z" level=info msg="Forcibly stopping sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\"" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.006 [WARNING][5548] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"4f18c687-4cb5-49f2-9647-374af2e4bff4", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-222-125", ContainerID:"26668b9ea6d0953c8ffd99491332534b989d8d0c696606fa84e19fda734013ea", Pod:"goldmane-78d55f7ddc-htsn7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07c17a7dae9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.006 [INFO][5548] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.006 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" iface="eth0" netns="" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.006 [INFO][5548] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.006 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.032 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.033 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.033 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.037 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.037 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" HandleID="k8s-pod-network.864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" Workload="172--233--222--125-k8s-goldmane--78d55f7ddc--htsn7-eth0" May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.038 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:22:24.046265 containerd[1455]: 2025-05-17 00:22:24.041 [INFO][5548] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1" May 17 00:22:24.046646 containerd[1455]: time="2025-05-17T00:22:24.046323600Z" level=info msg="TearDown network for sandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" successfully" May 17 00:22:24.055122 containerd[1455]: time="2025-05-17T00:22:24.055087081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:24.055203 containerd[1455]: time="2025-05-17T00:22:24.055135951Z" level=info msg="RemovePodSandbox \"864228074f87eb05795ae3b8fd10053b8f8bfc1797956910efa2962202d7d4d1\" returns successfully" May 17 00:22:33.842558 kubelet[2509]: E0517 00:22:33.842456 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:22:33.844060 containerd[1455]: time="2025-05-17T00:22:33.843544521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:22:33.961232 containerd[1455]: time="2025-05-17T00:22:33.961027657Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:33.962231 containerd[1455]: time="2025-05-17T00:22:33.962114631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:33.962231 containerd[1455]: time="2025-05-17T00:22:33.962194440Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:22:33.962861 kubelet[2509]: E0517 00:22:33.962454 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:33.962861 kubelet[2509]: E0517 00:22:33.962503 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:22:33.962861 kubelet[2509]: E0517 00:22:33.962592 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:839122e2a12b4271ae6fd9949780c33e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:33.965093 containerd[1455]: time="2025-05-17T00:22:33.965078823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:22:34.064431 containerd[1455]: time="2025-05-17T00:22:34.064384238Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:34.065641 containerd[1455]: time="2025-05-17T00:22:34.065550481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:34.065641 containerd[1455]: time="2025-05-17T00:22:34.065602451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:22:34.066269 kubelet[2509]: E0517 00:22:34.065967 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:34.066269 kubelet[2509]: E0517 00:22:34.066015 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:22:34.066269 kubelet[2509]: E0517 00:22:34.066108 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:34.067372 kubelet[2509]: E0517 00:22:34.067282 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:22:35.840390 kubelet[2509]: E0517 00:22:35.840350 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:39.840709 kubelet[2509]: E0517 00:22:39.840654 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:41.840799 kubelet[2509]: E0517 00:22:41.840708 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:46.844170 kubelet[2509]: E0517 00:22:46.844087 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:22:48.841976 containerd[1455]: time="2025-05-17T00:22:48.841878774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:22:48.965214 containerd[1455]: time="2025-05-17T00:22:48.965124259Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:22:48.966560 containerd[1455]: time="2025-05-17T00:22:48.966526226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:22:48.966671 containerd[1455]: time="2025-05-17T00:22:48.966588496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:22:48.966847 kubelet[2509]: E0517 00:22:48.966785 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:48.966847 kubelet[2509]: E0517 00:22:48.966831 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:22:48.968054 kubelet[2509]: E0517 00:22:48.966941 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-htsn7_calico-system(4f18c687-4cb5-49f2-9647-374af2e4bff4): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:22:48.968420 kubelet[2509]: E0517 00:22:48.968348 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:22:51.986405 systemd[1]: run-containerd-runc-k8s.io-57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961-runc.t9CEum.mount: Deactivated successfully. May 17 00:22:58.844282 kubelet[2509]: E0517 00:22:58.843850 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:22:59.841918 kubelet[2509]: E0517 00:22:59.841816 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:23:02.845100 kubelet[2509]: E0517 00:23:02.845013 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:23:12.841225 kubelet[2509]: E0517 00:23:12.840767 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:23:12.843643 kubelet[2509]: E0517 00:23:12.843612 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:23:16.840824 kubelet[2509]: E0517 00:23:16.840466 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:23:17.841081 kubelet[2509]: E0517 00:23:17.841017 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:23:22.841508 kubelet[2509]: E0517 00:23:22.840965 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:23:25.604276 systemd[1]: Started sshd@7-172.233.222.125:22-139.178.89.65:41532.service - OpenSSH per-connection server daemon (139.178.89.65:41532). May 17 00:23:25.843233 containerd[1455]: time="2025-05-17T00:23:25.843143839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:23:25.931225 sshd[5691]: Accepted publickey for core from 139.178.89.65 port 41532 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:25.938032 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:25.944843 systemd-logind[1437]: New session 8 of user core. May 17 00:23:25.950294 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 17 00:23:26.201264 containerd[1455]: time="2025-05-17T00:23:26.200961363Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:26.202914 containerd[1455]: time="2025-05-17T00:23:26.202158596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:26.202914 containerd[1455]: time="2025-05-17T00:23:26.202237484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:23:26.203019 kubelet[2509]: E0517 00:23:26.202551 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:26.203019 kubelet[2509]: E0517 00:23:26.202603 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:23:26.203019 kubelet[2509]: E0517 00:23:26.202711 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:839122e2a12b4271ae6fd9949780c33e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:26.205485 containerd[1455]: time="2025-05-17T00:23:26.205443437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:23:26.261931 sshd[5691]: pam_unix(sshd:session): session closed for user core May 17 00:23:26.267248 systemd[1]: sshd@7-172.233.222.125:22-139.178.89.65:41532.service: Deactivated successfully. May 17 00:23:26.270708 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:23:26.272547 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit. May 17 00:23:26.273735 systemd-logind[1437]: Removed session 8. 
May 17 00:23:26.327864 containerd[1455]: time="2025-05-17T00:23:26.327827153Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:26.328663 containerd[1455]: time="2025-05-17T00:23:26.328634838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:26.328858 containerd[1455]: time="2025-05-17T00:23:26.328713866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:23:26.328940 kubelet[2509]: E0517 00:23:26.328891 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:26.329038 kubelet[2509]: E0517 00:23:26.328950 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:23:26.329345 kubelet[2509]: E0517 00:23:26.329060 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96t5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86c8456b49-frszb_calico-system(7b75dfdd-c774-4c10-b431-7a20d6743288): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:26.330651 kubelet[2509]: E0517 00:23:26.330592 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:23:29.841682 containerd[1455]: time="2025-05-17T00:23:29.841479763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 
17 00:23:29.941128 containerd[1455]: time="2025-05-17T00:23:29.941091667Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:23:29.941915 containerd[1455]: time="2025-05-17T00:23:29.941894665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:23:29.942059 containerd[1455]: time="2025-05-17T00:23:29.941948103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:23:29.942100 kubelet[2509]: E0517 00:23:29.942070 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:29.942522 kubelet[2509]: E0517 00:23:29.942102 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:23:29.942522 kubelet[2509]: E0517 00:23:29.942214 2509 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-htsn7_calico-system(4f18c687-4cb5-49f2-9647-374af2e4bff4): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:23:29.943620 kubelet[2509]: E0517 00:23:29.943586 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:23:31.325433 systemd[1]: Started sshd@8-172.233.222.125:22-139.178.89.65:59108.service - OpenSSH per-connection server daemon (139.178.89.65:59108). May 17 00:23:31.639992 sshd[5715]: Accepted publickey for core from 139.178.89.65 port 59108 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:31.641487 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:31.645763 systemd-logind[1437]: New session 9 of user core. May 17 00:23:31.651282 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:23:31.943377 sshd[5715]: pam_unix(sshd:session): session closed for user core May 17 00:23:31.947287 systemd[1]: sshd@8-172.233.222.125:22-139.178.89.65:59108.service: Deactivated successfully. May 17 00:23:31.949890 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:23:31.953461 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit. May 17 00:23:31.955230 systemd-logind[1437]: Removed session 9. May 17 00:23:36.999281 systemd[1]: Started sshd@9-172.233.222.125:22-139.178.89.65:60176.service - OpenSSH per-connection server daemon (139.178.89.65:60176). May 17 00:23:37.317423 sshd[5746]: Accepted publickey for core from 139.178.89.65 port 60176 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:37.319097 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:37.324487 systemd-logind[1437]: New session 10 of user core. May 17 00:23:37.331402 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:23:37.612077 sshd[5746]: pam_unix(sshd:session): session closed for user core May 17 00:23:37.616656 systemd[1]: sshd@9-172.233.222.125:22-139.178.89.65:60176.service: Deactivated successfully. May 17 00:23:37.618536 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:23:37.619407 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit. May 17 00:23:37.620276 systemd-logind[1437]: Removed session 10. May 17 00:23:37.670342 systemd[1]: Started sshd@10-172.233.222.125:22-139.178.89.65:60188.service - OpenSSH per-connection server daemon (139.178.89.65:60188). May 17 00:23:37.988874 sshd[5779]: Accepted publickey for core from 139.178.89.65 port 60188 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:37.990595 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:37.995765 systemd-logind[1437]: New session 11 of user core. May 17 00:23:38.003325 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:23:38.308234 sshd[5779]: pam_unix(sshd:session): session closed for user core May 17 00:23:38.311692 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit. May 17 00:23:38.314648 systemd[1]: sshd@10-172.233.222.125:22-139.178.89.65:60188.service: Deactivated successfully. May 17 00:23:38.316657 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:23:38.320353 systemd-logind[1437]: Removed session 11. May 17 00:23:38.369364 systemd[1]: Started sshd@11-172.233.222.125:22-139.178.89.65:60204.service - OpenSSH per-connection server daemon (139.178.89.65:60204). 
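The goldmane container spec above carries exec-based liveness and readiness probes (/health -live and /health -ready, TimeoutSeconds:5, PeriodSeconds 60 and 30). An exec probe amounts to running the command under a deadline and treating a non-zero exit or a timeout as failure; a minimal standard-library sketch of that shape, where /health and the 5s timeout come from the spec and the rest is illustrative rather than kubelet's actual prober:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runProbe mirrors the shape of the exec probes in the goldmane spec:
// run the command, fail on non-zero exit or when the deadline expires.
func runProbe(binary, arg string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return exec.CommandContext(ctx, binary, arg).Run()
}

func main() {
	// TimeoutSeconds:5 from the spec; /health -live is the liveness command.
	if err := runProbe("/health", "-live", 5*time.Second); err != nil {
		fmt.Println("liveness probe failed:", err)
		return
	}
	fmt.Println("liveness probe ok")
}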
May 17 00:23:38.685275 sshd[5790]: Accepted publickey for core from 139.178.89.65 port 60204 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:38.685954 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:38.691828 systemd-logind[1437]: New session 12 of user core. May 17 00:23:38.694309 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:23:38.973056 sshd[5790]: pam_unix(sshd:session): session closed for user core May 17 00:23:38.976736 systemd[1]: sshd@11-172.233.222.125:22-139.178.89.65:60204.service: Deactivated successfully. May 17 00:23:38.978834 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:23:38.980042 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit. May 17 00:23:38.981441 systemd-logind[1437]: Removed session 12. May 17 00:23:40.842647 kubelet[2509]: E0517 00:23:40.842131 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:23:40.843039 kubelet[2509]: E0517 00:23:40.842945 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:23:44.047454 systemd[1]: Started sshd@12-172.233.222.125:22-139.178.89.65:60214.service - OpenSSH per-connection server daemon (139.178.89.65:60214). May 17 00:23:44.378070 sshd[5803]: Accepted publickey for core from 139.178.89.65 port 60214 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:44.379920 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:44.384631 systemd-logind[1437]: New session 13 of user core. May 17 00:23:44.391430 systemd[1]: Started session-13.scope - Session 13 of User core. 
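Between the ErrImagePull entries earlier and the ImagePullBackOff entries above, the kubelet has moved the failing pulls onto its retry backoff, which is why the same error now repeats at widening intervals. A sketch of that doubling schedule, assuming the kubelet defaults of a 10s initial delay capped at 5 minutes:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial backoff, doubling per
	// failed pull, capped at 5 minutes (the steady state seen above).
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d fails -> next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}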
May 17 00:23:44.673620 sshd[5803]: pam_unix(sshd:session): session closed for user core May 17 00:23:44.676970 systemd[1]: sshd@12-172.233.222.125:22-139.178.89.65:60214.service: Deactivated successfully. May 17 00:23:44.679044 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:23:44.680798 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit. May 17 00:23:44.681604 systemd-logind[1437]: Removed session 13. May 17 00:23:44.733453 systemd[1]: Started sshd@13-172.233.222.125:22-139.178.89.65:60220.service - OpenSSH per-connection server daemon (139.178.89.65:60220). May 17 00:23:45.051533 sshd[5816]: Accepted publickey for core from 139.178.89.65 port 60220 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:45.053221 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:45.057847 systemd-logind[1437]: New session 14 of user core. May 17 00:23:45.066348 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:23:45.484352 sshd[5816]: pam_unix(sshd:session): session closed for user core May 17 00:23:45.490066 systemd[1]: sshd@13-172.233.222.125:22-139.178.89.65:60220.service: Deactivated successfully. May 17 00:23:45.492598 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:23:45.495244 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit. May 17 00:23:45.496749 systemd-logind[1437]: Removed session 14. May 17 00:23:45.541366 systemd[1]: Started sshd@14-172.233.222.125:22-139.178.89.65:60224.service - OpenSSH per-connection server daemon (139.178.89.65:60224). May 17 00:23:45.875211 sshd[5828]: Accepted publickey for core from 139.178.89.65 port 60224 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:45.876681 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:45.882879 systemd-logind[1437]: New session 15 of user core. May 17 00:23:45.888344 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:23:46.964069 sshd[5828]: pam_unix(sshd:session): session closed for user core May 17 00:23:46.969759 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit. May 17 00:23:46.970763 systemd[1]: sshd@14-172.233.222.125:22-139.178.89.65:60224.service: Deactivated successfully. May 17 00:23:46.973242 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:23:46.974446 systemd-logind[1437]: Removed session 15. May 17 00:23:47.029417 systemd[1]: Started sshd@15-172.233.222.125:22-139.178.89.65:40630.service - OpenSSH per-connection server daemon (139.178.89.65:40630). May 17 00:23:47.363802 sshd[5846]: Accepted publickey for core from 139.178.89.65 port 40630 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:47.368230 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:47.373496 systemd-logind[1437]: New session 16 of user core. May 17 00:23:47.378322 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:23:47.781641 sshd[5846]: pam_unix(sshd:session): session closed for user core May 17 00:23:47.786667 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit. May 17 00:23:47.787670 systemd[1]: sshd@15-172.233.222.125:22-139.178.89.65:40630.service: Deactivated successfully. May 17 00:23:47.791399 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:23:47.793322 systemd-logind[1437]: Removed session 16. 
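The sshd and systemd-logind entries in this stretch always arrive in the same open/close shape (Accepted publickey, session opened, Started session-N.scope, session closed, Removed session N). A hypothetical sketch for pairing those events when mining a log like this one; the patterns match only the fields visible above and are illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Hypothetical patterns over the systemd/logind fields above; a single
// physical line in this log can carry several entries, hence FindAll.
var (
	opened = regexp.MustCompile(`Started session-(\d+)\.scope`)
	closed = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // lines here run long
	for sc.Scan() {
		for _, m := range opened.FindAllStringSubmatch(sc.Text(), -1) {
			open[m[1]] = true
		}
		for _, m := range closed.FindAllStringSubmatch(sc.Text(), -1) {
			delete(open, m[1])
			fmt.Println("session", m[1], "closed cleanly")
		}
	}
	for id := range open {
		fmt.Println("session", id, "never closed")
	}
}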
May 17 00:23:47.839616 systemd[1]: Started sshd@16-172.233.222.125:22-139.178.89.65:40640.service - OpenSSH per-connection server daemon (139.178.89.65:40640). May 17 00:23:48.187470 sshd[5857]: Accepted publickey for core from 139.178.89.65 port 40640 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:48.189708 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:48.195236 systemd-logind[1437]: New session 17 of user core. May 17 00:23:48.202247 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:23:48.481533 sshd[5857]: pam_unix(sshd:session): session closed for user core May 17 00:23:48.485283 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit. May 17 00:23:48.485563 systemd[1]: sshd@16-172.233.222.125:22-139.178.89.65:40640.service: Deactivated successfully. May 17 00:23:48.487042 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:23:48.488050 systemd-logind[1437]: Removed session 17. May 17 00:23:50.841994 kubelet[2509]: E0517 00:23:50.841956 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:23:51.847523 kubelet[2509]: E0517 00:23:51.847442 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:23:51.996743 systemd[1]: run-containerd-runc-k8s.io-57a96ffc04bfbeb811ded530c0e598685cd5f7415b8a524bf25f7039bdb7b961-runc.l6CB2A.mount: Deactivated successfully. 
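The recurring "Nameserver limits exceeded" warning reflects the glibc resolver's limit of three resolv.conf nameservers: the kubelet applies the first three and reports the rest as omitted, which is why the applied line above shows exactly 172.232.0.9 172.232.0.19 172.232.0.20. A sketch of that truncation, assuming the three-entry limit, with the fourth address invented purely for illustration:

package main

import "fmt"

// applyNameserverLimit mimics the behavior behind the kubelet warning:
// glibc resolvers honor at most three nameserver entries, so anything
// beyond the first three is dropped and reported.
func applyNameserverLimit(servers []string) []string {
	const maxNS = 3 // glibc's nameserver limit
	if len(servers) <= maxNS {
		return servers
	}
	fmt.Printf("Nameserver limits were exceeded, omitting: %v\n", servers[maxNS:])
	return servers[:maxNS]
}

func main() {
	applied := applyNameserverLimit([]string{
		"172.232.0.9", "172.232.0.19", "172.232.0.20",
		"10.0.0.53", // hypothetical fourth entry that would trigger the warning
	})
	fmt.Println("applied nameserver line:", applied)
}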
May 17 00:23:52.844972 kubelet[2509]: E0517 00:23:52.844614 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:23:53.538698 systemd[1]: Started sshd@17-172.233.222.125:22-139.178.89.65:40656.service - OpenSSH per-connection server daemon (139.178.89.65:40656). May 17 00:23:53.865119 sshd[5895]: Accepted publickey for core from 139.178.89.65 port 40656 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:53.866159 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:53.869587 systemd-logind[1437]: New session 18 of user core. May 17 00:23:53.876280 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:23:54.150050 sshd[5895]: pam_unix(sshd:session): session closed for user core May 17 00:23:54.154962 systemd[1]: sshd@17-172.233.222.125:22-139.178.89.65:40656.service: Deactivated successfully. May 17 00:23:54.157912 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:23:54.159031 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit. May 17 00:23:54.160215 systemd-logind[1437]: Removed session 18. May 17 00:23:54.842540 kubelet[2509]: E0517 00:23:54.841357 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:23:59.215270 systemd[1]: Started sshd@18-172.233.222.125:22-139.178.89.65:58428.service - OpenSSH per-connection server daemon (139.178.89.65:58428). May 17 00:23:59.549974 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 58428 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:23:59.551601 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:59.556627 systemd-logind[1437]: New session 19 of user core. May 17 00:23:59.561300 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:23:59.880392 sshd[5908]: pam_unix(sshd:session): session closed for user core May 17 00:23:59.883840 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit. May 17 00:23:59.886105 systemd[1]: sshd@18-172.233.222.125:22-139.178.89.65:58428.service: Deactivated successfully. May 17 00:23:59.889678 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:23:59.890924 systemd-logind[1437]: Removed session 19. 
May 17 00:24:03.841880 kubelet[2509]: E0517 00:24:03.841409 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:24:04.948389 systemd[1]: Started sshd@19-172.233.222.125:22-139.178.89.65:58442.service - OpenSSH per-connection server daemon (139.178.89.65:58442). May 17 00:24:05.280028 sshd[5942]: Accepted publickey for core from 139.178.89.65 port 58442 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:24:05.281759 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:05.287388 systemd-logind[1437]: New session 20 of user core. May 17 00:24:05.292303 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:24:05.570633 sshd[5942]: pam_unix(sshd:session): session closed for user core May 17 00:24:05.575908 systemd[1]: sshd@19-172.233.222.125:22-139.178.89.65:58442.service: Deactivated successfully. May 17 00:24:05.578812 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:24:05.579604 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. May 17 00:24:05.581330 systemd-logind[1437]: Removed session 20. May 17 00:24:05.843367 kubelet[2509]: E0517 00:24:05.843227 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:24:08.841486 kubelet[2509]: E0517 00:24:08.840494 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:24:10.630278 systemd[1]: Started sshd@20-172.233.222.125:22-139.178.89.65:36948.service - OpenSSH per-connection server daemon (139.178.89.65:36948). 
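The scope value in every failing token URL is just the percent-encoded pull scope for the repository; Go's url.QueryEscape reproduces the exact string seen in the log:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Prints "repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull", the scope
	// visible in the failed token requests above.
	fmt.Println(url.QueryEscape("repository:flatcar/calico/goldmane:pull"))
}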
May 17 00:24:10.959530 sshd[5974]: Accepted publickey for core from 139.178.89.65 port 36948 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:24:10.961516 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:10.966540 systemd-logind[1437]: New session 21 of user core. May 17 00:24:10.971296 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:24:11.264904 sshd[5974]: pam_unix(sshd:session): session closed for user core May 17 00:24:11.270152 systemd[1]: sshd@20-172.233.222.125:22-139.178.89.65:36948.service: Deactivated successfully. May 17 00:24:11.272105 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:24:11.272733 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. May 17 00:24:11.273739 systemd-logind[1437]: Removed session 21. May 17 00:24:16.324343 systemd[1]: Started sshd@21-172.233.222.125:22-139.178.89.65:36962.service - OpenSSH per-connection server daemon (139.178.89.65:36962). May 17 00:24:16.651157 sshd[5987]: Accepted publickey for core from 139.178.89.65 port 36962 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:24:16.652631 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:16.657555 systemd-logind[1437]: New session 22 of user core. May 17 00:24:16.664291 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:24:16.843935 kubelet[2509]: E0517 00:24:16.843752 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-86c8456b49-frszb" podUID="7b75dfdd-c774-4c10-b431-7a20d6743288" May 17 00:24:16.844960 kubelet[2509]: E0517 00:24:16.844413 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-htsn7" podUID="4f18c687-4cb5-49f2-9647-374af2e4bff4" May 17 00:24:16.939113 sshd[5987]: pam_unix(sshd:session): session closed for user 
core May 17 00:24:16.943318 systemd[1]: sshd@21-172.233.222.125:22-139.178.89.65:36962.service: Deactivated successfully. May 17 00:24:16.945814 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:24:16.946409 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. May 17 00:24:16.947272 systemd-logind[1437]: Removed session 22. May 17 00:24:21.840790 kubelet[2509]: E0517 00:24:21.840744 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 17 00:24:22.005327 systemd[1]: Started sshd@22-172.233.222.125:22-139.178.89.65:39904.service - OpenSSH per-connection server daemon (139.178.89.65:39904). May 17 00:24:22.332285 sshd[6015]: Accepted publickey for core from 139.178.89.65 port 39904 ssh2: RSA SHA256:ULv753Dw0eEN/dfWF5UBn/lS3aFHHXDQmJpDZs4I434 May 17 00:24:22.334944 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:24:22.339833 systemd-logind[1437]: New session 23 of user core. May 17 00:24:22.345321 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:24:22.645647 sshd[6015]: pam_unix(sshd:session): session closed for user core May 17 00:24:22.650258 systemd[1]: sshd@22-172.233.222.125:22-139.178.89.65:39904.service: Deactivated successfully. May 17 00:24:22.660810 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:24:22.662982 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. May 17 00:24:22.664421 systemd-logind[1437]: Removed session 23. May 17 00:24:23.840934 kubelet[2509]: E0517 00:24:23.840873 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"