May 27 03:32:28.834117 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:32:28.834137 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:32:28.834151 kernel: BIOS-provided physical RAM map:
May 27 03:32:28.834158 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
May 27 03:32:28.834164 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 27 03:32:28.834171 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
May 27 03:32:28.834178 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 27 03:32:28.834185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
May 27 03:32:28.834191 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 27 03:32:28.834197 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 27 03:32:28.834204 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 27 03:32:28.834213 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 27 03:32:28.834219 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 27 03:32:28.834226 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 27 03:32:28.834233 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 27 03:32:28.834240 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 27 03:32:28.834249 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:32:28.834256 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:32:28.834263 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:32:28.834270 kernel: NX (Execute Disable) protection: active
May 27 03:32:28.834276 kernel: APIC: Static calls initialized
May 27 03:32:28.834283 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
May 27 03:32:28.834291 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
May 27 03:32:28.834297 kernel: extended physical RAM map:
May 27 03:32:28.834304 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
May 27 03:32:28.834311 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
May 27 03:32:28.834318 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
May 27 03:32:28.834327 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
May 27 03:32:28.834334 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
May 27 03:32:28.834341 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
May 27 03:32:28.834351 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
May 27 03:32:28.834358 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
May 27 03:32:28.834365 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
May 27 03:32:28.834372 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
May 27 03:32:28.834379 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
May 27 03:32:28.834386 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
May 27 03:32:28.834392 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
May 27 03:32:28.834399 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
May 27 03:32:28.834408 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
May 27 03:32:28.834415 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
May 27 03:32:28.834426 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
May 27 03:32:28.834433 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:32:28.834440 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:32:28.834447 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:32:28.834456 kernel: efi: EFI v2.7 by EDK II
May 27 03:32:28.834463 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
May 27 03:32:28.834471 kernel: random: crng init done
May 27 03:32:28.834478 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 27 03:32:28.834485 kernel: secureboot: Secure boot enabled
May 27 03:32:28.834492 kernel: SMBIOS 2.8 present.
May 27 03:32:28.834499 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 27 03:32:28.834506 kernel: DMI: Memory slots populated: 1/1
May 27 03:32:28.834513 kernel: Hypervisor detected: KVM
May 27 03:32:28.834520 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:32:28.834528 kernel: kvm-clock: using sched offset of 4814995626 cycles
May 27 03:32:28.834537 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:32:28.834545 kernel: tsc: Detected 2794.748 MHz processor
May 27 03:32:28.834553 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:32:28.834560 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:32:28.834567 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
May 27 03:32:28.834575 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 03:32:28.834582 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:32:28.834589 kernel: Using GB pages for direct mapping
May 27 03:32:28.834597 kernel: ACPI: Early table checksum verification disabled
May 27 03:32:28.834606 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
May 27 03:32:28.834614 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 27 03:32:28.834621 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:32:28.834629 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:32:28.834636 kernel: ACPI: FACS 0x000000009BBDD000 000040
May 27 03:32:28.834652 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:32:28.834659 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:32:28.834667 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:32:28.834674 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:32:28.834683 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 27 03:32:28.834691 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
May 27 03:32:28.834699 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
May 27 03:32:28.834706 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
May 27 03:32:28.834716 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
May 27 03:32:28.834724 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
May 27 03:32:28.834731 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
May 27 03:32:28.834738 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
May 27 03:32:28.834745 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
May 27 03:32:28.834755 kernel: No NUMA configuration found
May 27 03:32:28.834762 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
May 27 03:32:28.834770 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
May 27 03:32:28.834777 kernel: Zone ranges:
May 27 03:32:28.834785 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:32:28.834792 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
May 27 03:32:28.834799 kernel: Normal empty
May 27 03:32:28.834806 kernel: Device empty
May 27 03:32:28.834814 kernel: Movable zone start for each node
May 27 03:32:28.834823 kernel: Early memory node ranges
May 27 03:32:28.834831 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
May 27 03:32:28.834838 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
May 27 03:32:28.834845 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
May 27 03:32:28.834853 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
May 27 03:32:28.834860 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
May 27 03:32:28.834867 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
May 27 03:32:28.834874 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:32:28.834882 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
May 27 03:32:28.834889 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:32:28.834898 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 03:32:28.834905 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 27 03:32:28.834913 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
May 27 03:32:28.834920 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:32:28.834927 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:32:28.834935 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:32:28.834942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:32:28.836013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:32:28.836023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:32:28.836035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:32:28.836043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:32:28.836050 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:32:28.836058 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:32:28.836065 kernel: TSC deadline timer available
May 27 03:32:28.836072 kernel: CPU topo: Max. logical packages: 1
May 27 03:32:28.836080 kernel: CPU topo: Max. logical dies: 1
May 27 03:32:28.836089 kernel: CPU topo: Max. dies per package: 1
May 27 03:32:28.836103 kernel: CPU topo: Max. threads per core: 1
May 27 03:32:28.836110 kernel: CPU topo: Num. cores per package: 4
May 27 03:32:28.836118 kernel: CPU topo: Num. threads per package: 4
May 27 03:32:28.836125 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 03:32:28.836135 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:32:28.836143 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:32:28.836150 kernel: kvm-guest: setup PV sched yield
May 27 03:32:28.836158 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 27 03:32:28.836165 kernel: Booting paravirtualized kernel on KVM
May 27 03:32:28.836175 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:32:28.836183 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 03:32:28.836191 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 03:32:28.836198 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 03:32:28.836222 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 03:32:28.836238 kernel: kvm-guest: PV spinlocks enabled
May 27 03:32:28.836246 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:32:28.836255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:32:28.836265 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:32:28.836273 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:32:28.836281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:32:28.836288 kernel: Fallback order for Node 0: 0
May 27 03:32:28.836300 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
May 27 03:32:28.836307 kernel: Policy zone: DMA32
May 27 03:32:28.836315 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:32:28.836323 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 03:32:28.836330 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:32:28.836340 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:32:28.836348 kernel: Dynamic Preempt: voluntary
May 27 03:32:28.836355 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:32:28.836363 kernel: rcu: RCU event tracing is enabled.
May 27 03:32:28.836371 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 03:32:28.836379 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:32:28.836387 kernel: Rude variant of Tasks RCU enabled.
May 27 03:32:28.836395 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:32:28.836403 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:32:28.836412 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 03:32:28.836420 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:32:28.836428 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:32:28.836436 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:32:28.836443 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 03:32:28.836451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:32:28.836459 kernel: Console: colour dummy device 80x25
May 27 03:32:28.836466 kernel: printk: legacy console [ttyS0] enabled
May 27 03:32:28.836474 kernel: ACPI: Core revision 20240827
May 27 03:32:28.836483 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:32:28.836491 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:32:28.836499 kernel: x2apic enabled
May 27 03:32:28.836506 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:32:28.836514 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:32:28.836522 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:32:28.836530 kernel: kvm-guest: setup PV IPIs
May 27 03:32:28.836537 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:32:28.836545 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:32:28.836555 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 03:32:28.836563 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:32:28.836570 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:32:28.836578 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:32:28.836585 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:32:28.836593 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:32:28.836601 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:32:28.836608 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:32:28.836616 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:32:28.836626 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:32:28.836633 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:32:28.836650 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:32:28.836658 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:32:28.836666 kernel: x86/bugs: return thunk changed
May 27 03:32:28.836674 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:32:28.836682 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:32:28.836690 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:32:28.836700 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:32:28.836707 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:32:28.836715 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:32:28.836723 kernel: Freeing SMP alternatives memory: 32K
May 27 03:32:28.836730 kernel: pid_max: default: 32768 minimum: 301
May 27 03:32:28.836738 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:32:28.836746 kernel: landlock: Up and running.
May 27 03:32:28.836753 kernel: SELinux: Initializing.
May 27 03:32:28.836761 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:32:28.836771 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:32:28.836779 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:32:28.836786 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:32:28.836794 kernel: ... version: 0
May 27 03:32:28.836801 kernel: ... bit width: 48
May 27 03:32:28.836809 kernel: ... generic registers: 6
May 27 03:32:28.836817 kernel: ... value mask: 0000ffffffffffff
May 27 03:32:28.836824 kernel: ... max period: 00007fffffffffff
May 27 03:32:28.836832 kernel: ... fixed-purpose events: 0
May 27 03:32:28.836841 kernel: ... event mask: 000000000000003f
May 27 03:32:28.836849 kernel: signal: max sigframe size: 1776
May 27 03:32:28.836856 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:32:28.836864 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:32:28.836872 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:32:28.836880 kernel: smp: Bringing up secondary CPUs ...
May 27 03:32:28.836887 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:32:28.836895 kernel: .... node #0, CPUs: #1 #2 #3
May 27 03:32:28.836902 kernel: smp: Brought up 1 node, 4 CPUs
May 27 03:32:28.836910 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 03:32:28.836920 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137064K reserved, 0K cma-reserved)
May 27 03:32:28.836927 kernel: devtmpfs: initialized
May 27 03:32:28.836935 kernel: x86/mm: Memory block size: 128MB
May 27 03:32:28.836943 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
May 27 03:32:28.836963 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
May 27 03:32:28.836971 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:32:28.836978 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 03:32:28.836986 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:32:28.836996 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:32:28.837004 kernel: audit: initializing netlink subsys (disabled)
May 27 03:32:28.837011 kernel: audit: type=2000 audit(1748316747.117:1): state=initialized audit_enabled=0 res=1
May 27 03:32:28.837019 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:32:28.837027 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:32:28.837034 kernel: cpuidle: using governor menu
May 27 03:32:28.837042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:32:28.837050 kernel: dca service started, version 1.12.1
May 27 03:32:28.837058 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 27 03:32:28.837067 kernel: PCI: Using configuration type 1 for base access
May 27 03:32:28.837075 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:32:28.837083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:32:28.837090 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:32:28.837098 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:32:28.837106 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:32:28.837113 kernel: ACPI: Added _OSI(Module Device)
May 27 03:32:28.837121 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:32:28.837129 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:32:28.837138 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:32:28.837146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:32:28.837153 kernel: ACPI: Interpreter enabled
May 27 03:32:28.837161 kernel: ACPI: PM: (supports S0 S5)
May 27 03:32:28.837168 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:32:28.837176 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:32:28.837184 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:32:28.837192 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:32:28.837199 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:32:28.837374 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:32:28.837493 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:32:28.837608 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:32:28.837618 kernel: PCI host bridge to bus 0000:00
May 27 03:32:28.837747 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:32:28.837853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:32:28.838057 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:32:28.838172 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 27 03:32:28.838276 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 27 03:32:28.838379 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 27 03:32:28.838483 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:32:28.838616 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:32:28.838752 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:32:28.838874 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 27 03:32:28.839002 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 27 03:32:28.839122 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 03:32:28.839236 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:32:28.839359 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:32:28.839475 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 27 03:32:28.839590 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 27 03:32:28.839773 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 27 03:32:28.839907 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 03:32:28.840091 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 27 03:32:28.840211 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 27 03:32:28.840325 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 27 03:32:28.840449 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:32:28.840568 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 27 03:32:28.840692 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 27 03:32:28.840806 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 27 03:32:28.840919 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 27 03:32:28.841071 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:32:28.841193 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:32:28.841314 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:32:28.841433 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 27 03:32:28.841545 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 27 03:32:28.841674 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:32:28.841791 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 27 03:32:28.841801 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:32:28.841809 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:32:28.841818 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:32:28.841829 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:32:28.841836 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:32:28.841844 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:32:28.841852 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:32:28.841860 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:32:28.841867 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:32:28.841875 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:32:28.841883 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:32:28.841891 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:32:28.841901 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:32:28.841908 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:32:28.841916 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:32:28.841924 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:32:28.841932 kernel: iommu: Default domain type: Translated
May 27 03:32:28.841939 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:32:28.841972 kernel: efivars: Registered efivars operations
May 27 03:32:28.841980 kernel: PCI: Using ACPI for IRQ routing
May 27 03:32:28.841988 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:32:28.841996 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
May 27 03:32:28.842009 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
May 27 03:32:28.842020 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
May 27 03:32:28.842030 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
May 27 03:32:28.842038 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
May 27 03:32:28.842159 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:32:28.842287 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:32:28.842401 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:32:28.842411 kernel: vgaarb: loaded
May 27 03:32:28.842422 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:32:28.842430 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:32:28.842438 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:32:28.842446 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:32:28.842454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:32:28.842462 kernel: pnp: PnP ACPI init
May 27 03:32:28.842589 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 27 03:32:28.842601 kernel: pnp: PnP ACPI: found 6 devices
May 27 03:32:28.842612 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:32:28.842619 kernel: NET: Registered PF_INET protocol family
May 27 03:32:28.842627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:32:28.842635 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:32:28.842652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:32:28.842660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:32:28.842668 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:32:28.842676 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:32:28.842683 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:32:28.842694 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:32:28.842702 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:32:28.842709 kernel: NET: Registered PF_XDP protocol family
May 27 03:32:28.842826 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 03:32:28.842941 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 03:32:28.843116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:32:28.843221 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:32:28.843324 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:32:28.843458 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 03:32:28.843562 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 03:32:28.843675 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 03:32:28.843685 kernel: PCI: CLS 0 bytes, default 64
May 27 03:32:28.843694 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:32:28.843702 kernel: Initialise system trusted keyrings
May 27 03:32:28.843710 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:32:28.843718 kernel: Key type asymmetric registered
May 27 03:32:28.843726 kernel: Asymmetric key parser 'x509' registered
May 27 03:32:28.843748 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:32:28.843758 kernel: io scheduler mq-deadline registered
May 27 03:32:28.843766 kernel: io scheduler kyber registered
May 27 03:32:28.843776 kernel: io scheduler bfq registered
May 27 03:32:28.843784 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:32:28.843793 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:32:28.843801 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:32:28.843809 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:32:28.843819 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:32:28.843829 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:32:28.843837 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:32:28.843845 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:32:28.843853 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:32:28.843987 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:32:28.844000 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:32:28.844127 kernel: rtc_cmos 00:04: registered as rtc0
May 27 03:32:28.844236 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:32:28 UTC (1748316748)
May 27 03:32:28.844347 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 27 03:32:28.844357 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:32:28.844365 kernel: efifb: probing for efifb
May 27 03:32:28.844373 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 27 03:32:28.844382 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 27 03:32:28.844390 kernel: efifb: scrolling: redraw
May 27 03:32:28.844398 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 03:32:28.844406 kernel: Console: switching to colour frame buffer device 160x50
May 27 03:32:28.844416 kernel: fb0: EFI VGA frame buffer device
May 27 03:32:28.844425 kernel: pstore: Using crash dump compression: deflate
May 27 03:32:28.844435 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:32:28.844443 kernel: NET: Registered PF_INET6 protocol family
May 27 03:32:28.844451 kernel: Segment Routing with IPv6
May 27 03:32:28.844459 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:32:28.844469 kernel: NET: Registered PF_PACKET protocol family
May 27 03:32:28.844477 kernel: Key type dns_resolver registered
May 27 03:32:28.844485 kernel: IPI shorthand broadcast: enabled
May 27 03:32:28.844493 kernel: sched_clock: Marking stable (2737002522, 139838962)->(2894050622, -17209138)
May 27 03:32:28.844501 kernel: registered taskstats version 1
May 27 03:32:28.844509 kernel: Loading compiled-in X.509 certificates
May 27 03:32:28.844517 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:32:28.844525 kernel: Demotion targets for Node 0: null
May 27 03:32:28.844533 kernel: Key type .fscrypt registered
May 27 03:32:28.844543 kernel: Key type fscrypt-provisioning registered
May 27 03:32:28.844551 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:32:28.844559 kernel: ima: Allocated hash algorithm: sha1
May 27 03:32:28.844567 kernel: ima: No architecture policies found
May 27 03:32:28.844575 kernel: clk: Disabling unused clocks
May 27 03:32:28.844584 kernel: Warning: unable to open an initial console.
May 27 03:32:28.844592 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:32:28.844600 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:32:28.844608 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:32:28.844618 kernel: Run /init as init process
May 27 03:32:28.844626 kernel: with arguments:
May 27 03:32:28.844634 kernel: /init
May 27 03:32:28.844650 kernel: with environment:
May 27 03:32:28.844658 kernel: HOME=/
May 27 03:32:28.844666 kernel: TERM=linux
May 27 03:32:28.844674 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:32:28.844683 systemd[1]: Successfully made /usr/ read-only.
May 27 03:32:28.844698 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:32:28.844707 systemd[1]: Detected virtualization kvm.
May 27 03:32:28.844715 systemd[1]: Detected architecture x86-64.
May 27 03:32:28.844724 systemd[1]: Running in initrd.
May 27 03:32:28.844732 systemd[1]: No hostname configured, using default hostname.
May 27 03:32:28.844741 systemd[1]: Hostname set to .
May 27 03:32:28.844749 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:32:28.844758 systemd[1]: Queued start job for default target initrd.target.
May 27 03:32:28.844768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:32:28.844777 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:32:28.844786 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:32:28.844795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:32:28.844804 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:32:28.844814 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:32:28.844825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:32:28.844834 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:32:28.844843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:32:28.844852 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:32:28.844860 systemd[1]: Reached target paths.target - Path Units.
May 27 03:32:28.844868 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:32:28.844877 systemd[1]: Reached target swap.target - Swaps.
May 27 03:32:28.844885 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:32:28.844894 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:32:28.844905 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:32:28.844913 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:32:28.844922 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:32:28.844930 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:32:28.844939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:32:28.844967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:32:28.844976 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:32:28.844984 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:32:28.844995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:32:28.845006 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:32:28.845019 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:32:28.845030 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:32:28.845040 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:32:28.845052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:32:28.845061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:32:28.845070 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:32:28.845084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:32:28.845093 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:32:28.845104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:32:28.845133 systemd-journald[219]: Collecting audit messages is disabled.
May 27 03:32:28.845157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:32:28.845165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:32:28.845174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:32:28.845183 systemd-journald[219]: Journal started
May 27 03:32:28.845204 systemd-journald[219]: Runtime Journal (/run/log/journal/c10a814c77114ec2a4051388d7d4c26e) is 6M, max 48.2M, 42.2M free.
May 27 03:32:28.833894 systemd-modules-load[221]: Inserted module 'overlay'
May 27 03:32:28.849961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:32:28.851032 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:32:28.855159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:32:28.863979 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:32:28.865450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:32:28.866275 kernel: Bridge firewalling registered
May 27 03:32:28.865656 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 27 03:32:28.867131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:32:28.869530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:32:28.872526 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:32:28.872665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:32:28.874132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:32:28.884178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:32:28.893463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:32:28.895661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:32:28.898608 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:32:28.941402 systemd-resolved[272]: Positive Trust Anchors:
May 27 03:32:28.941416 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:32:28.941447 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:32:28.943822 systemd-resolved[272]: Defaulting to hostname 'linux'.
May 27 03:32:28.944871 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:32:28.950393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:32:29.006974 kernel: SCSI subsystem initialized
May 27 03:32:29.015975 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:32:29.025996 kernel: iscsi: registered transport (tcp)
May 27 03:32:29.046978 kernel: iscsi: registered transport (qla4xxx)
May 27 03:32:29.047012 kernel: QLogic iSCSI HBA Driver
May 27 03:32:29.066195 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:32:29.089817 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:32:29.090577 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:32:29.150123 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:32:29.153619 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:32:29.216986 kernel: raid6: avx2x4 gen() 29330 MB/s
May 27 03:32:29.233983 kernel: raid6: avx2x2 gen() 30887 MB/s
May 27 03:32:29.251062 kernel: raid6: avx2x1 gen() 25618 MB/s
May 27 03:32:29.251084 kernel: raid6: using algorithm avx2x2 gen() 30887 MB/s
May 27 03:32:29.269103 kernel: raid6: .... xor() 18724 MB/s, rmw enabled
May 27 03:32:29.269127 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:32:29.289977 kernel: xor: automatically using best checksumming function avx
May 27 03:32:29.451985 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:32:29.460688 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:32:29.462758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:32:29.494854 systemd-udevd[473]: Using default interface naming scheme 'v255'.
May 27 03:32:29.500289 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:32:29.501284 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:32:29.527224 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
May 27 03:32:29.555643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:32:29.557974 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:32:29.631744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:32:29.635361 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:32:29.667982 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 27 03:32:29.672090 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 03:32:29.677275 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:32:29.677297 kernel: GPT:9289727 != 19775487
May 27 03:32:29.677316 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:32:29.677326 kernel: GPT:9289727 != 19775487
May 27 03:32:29.677335 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:32:29.677346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:32:29.683985 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:32:29.690971 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 03:32:29.699978 kernel: libata version 3.00 loaded.
May 27 03:32:29.702374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:32:29.703671 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:32:29.708635 kernel: AES CTR mode by8 optimization enabled
May 27 03:32:29.705609 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:32:29.709341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:32:29.721762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:32:29.721876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:32:29.731978 kernel: ahci 0000:00:1f.2: version 3.0
May 27 03:32:29.736001 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 03:32:29.736024 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 03:32:29.736200 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 03:32:29.736367 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 03:32:29.739976 kernel: scsi host0: ahci
May 27 03:32:29.742971 kernel: scsi host1: ahci
May 27 03:32:29.743980 kernel: scsi host2: ahci
May 27 03:32:29.745151 kernel: scsi host3: ahci
May 27 03:32:29.745328 kernel: scsi host4: ahci
May 27 03:32:29.746060 kernel: scsi host5: ahci
May 27 03:32:29.747134 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 27 03:32:29.747186 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 27 03:32:29.749826 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 27 03:32:29.749844 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 27 03:32:29.749854 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 27 03:32:29.751638 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 27 03:32:29.756063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 03:32:29.768896 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 03:32:29.782943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:32:29.791740 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 03:32:29.792184 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 03:32:29.797413 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:32:29.798497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:32:29.821529 disk-uuid[631]: Primary Header is updated.
May 27 03:32:29.821529 disk-uuid[631]: Secondary Entries is updated.
May 27 03:32:29.821529 disk-uuid[631]: Secondary Header is updated.
May 27 03:32:29.825998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:32:29.827342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:32:29.831014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:32:30.058044 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 03:32:30.058098 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 03:32:30.058109 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 03:32:30.058984 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 27 03:32:30.059979 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 03:32:30.060976 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 03:32:30.061982 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 27 03:32:30.061994 kernel: ata3.00: applying bridge limits
May 27 03:32:30.063012 kernel: ata3.00: configured for UDMA/100
May 27 03:32:30.063995 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 27 03:32:30.116552 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 27 03:32:30.116762 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 03:32:30.136985 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 27 03:32:30.550640 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:32:30.552341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:32:30.554188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:32:30.555453 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:32:30.558435 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:32:30.590926 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:32:30.863512 disk-uuid[634]: The operation has completed successfully.
May 27 03:32:30.864833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 03:32:30.893842 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:32:30.893973 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:32:30.926640 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:32:30.952046 sh[666]: Success
May 27 03:32:30.969980 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:32:30.970006 kernel: device-mapper: uevent: version 1.0.3
May 27 03:32:30.971979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:32:30.979977 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:32:31.008842 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:32:31.012606 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:32:31.024991 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:32:31.029970 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:32:31.032694 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (678)
May 27 03:32:31.032710 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:32:31.032727 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:32:31.033667 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:32:31.038839 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:32:31.040143 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:32:31.041630 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:32:31.042339 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:32:31.044122 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:32:31.071774 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (711)
May 27 03:32:31.071813 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:32:31.071824 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:32:31.072647 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:32:31.080965 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:32:31.081619 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:32:31.083559 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:32:31.160361 ignition[754]: Ignition 2.21.0
May 27 03:32:31.160375 ignition[754]: Stage: fetch-offline
May 27 03:32:31.160411 ignition[754]: no configs at "/usr/lib/ignition/base.d"
May 27 03:32:31.160420 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:32:31.160510 ignition[754]: parsed url from cmdline: ""
May 27 03:32:31.160514 ignition[754]: no config URL provided
May 27 03:32:31.160519 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:32:31.160527 ignition[754]: no config at "/usr/lib/ignition/user.ign"
May 27 03:32:31.160550 ignition[754]: op(1): [started] loading QEMU firmware config module
May 27 03:32:31.160555 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 03:32:31.171520 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:32:31.171869 ignition[754]: op(1): [finished] loading QEMU firmware config module
May 27 03:32:31.171897 ignition[754]: QEMU firmware config was not found. Ignoring...
May 27 03:32:31.176862 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:32:31.215133 ignition[754]: parsing config with SHA512: 868616ccfb66a738a270bedcb853f7affa6caab434cbcc22ed13abc0a13fce3a7344327b693e55b605f0ba32961dea2164c5694212854ad081bcfdaf41669df0
May 27 03:32:31.220749 unknown[754]: fetched base config from "system"
May 27 03:32:31.220763 unknown[754]: fetched user config from "qemu"
May 27 03:32:31.221088 ignition[754]: fetch-offline: fetch-offline passed
May 27 03:32:31.221436 systemd-networkd[857]: lo: Link UP
May 27 03:32:31.221137 ignition[754]: Ignition finished successfully
May 27 03:32:31.221441 systemd-networkd[857]: lo: Gained carrier
May 27 03:32:31.222901 systemd-networkd[857]: Enumeration completed
May 27 03:32:31.223006 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:32:31.223418 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:32:31.223422 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:32:31.224109 systemd-networkd[857]: eth0: Link UP May 27 03:32:31.224113 systemd-networkd[857]: eth0: Gained carrier May 27 03:32:31.224120 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:32:31.226074 systemd[1]: Reached target network.target - Network. May 27 03:32:31.230004 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:32:31.232029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:32:31.233911 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:32:31.234813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:32:31.265459 ignition[862]: Ignition 2.21.0 May 27 03:32:31.265473 ignition[862]: Stage: kargs May 27 03:32:31.265606 ignition[862]: no configs at "/usr/lib/ignition/base.d" May 27 03:32:31.265617 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:32:31.266792 ignition[862]: kargs: kargs passed May 27 03:32:31.266845 ignition[862]: Ignition finished successfully May 27 03:32:31.271090 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:32:31.274079 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 03:32:31.309592 ignition[870]: Ignition 2.21.0 May 27 03:32:31.309604 ignition[870]: Stage: disks May 27 03:32:31.309763 ignition[870]: no configs at "/usr/lib/ignition/base.d" May 27 03:32:31.309777 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:32:31.310765 ignition[870]: disks: disks passed May 27 03:32:31.310812 ignition[870]: Ignition finished successfully May 27 03:32:31.317194 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:32:31.318433 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:32:31.318882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:32:31.319369 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:32:31.319711 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:32:31.320198 systemd[1]: Reached target basic.target - Basic System. May 27 03:32:31.321440 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 03:32:31.353593 systemd-resolved[272]: Detected conflict on linux IN A 10.0.0.8 May 27 03:32:31.353607 systemd-resolved[272]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. May 27 03:32:31.354852 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:32:31.470192 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:32:31.474084 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:32:31.578972 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:32:31.579001 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:32:31.581078 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:32:31.584261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:32:31.586702 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
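eth0 is configured here (and again later in the real root) by Flatcar's catch-all unit /usr/lib/systemd/network/zz-default.network, which requests the DHCPv4 lease logged above. The shipped unit's exact contents are not in this log; a minimal unit of that shape would be:

    # Illustrative zz-default.network equivalent (the actual shipped file may differ):
    [Match]
    Name=*

    [Network]
    DHCP=yes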
May 27 03:32:31.588656 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:32:31.588702 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:32:31.588723 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:32:31.601347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:32:31.603164 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:32:31.608408 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (888) May 27 03:32:31.608431 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:32:31.608442 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:32:31.608452 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:32:31.612787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:32:31.638111 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:32:31.643354 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory May 27 03:32:31.646866 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:32:31.650772 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:32:31.733566 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:32:31.734783 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:32:31.736982 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:32:31.752979 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:32:31.768074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:32:31.780343 ignition[1003]: INFO : Ignition 2.21.0 May 27 03:32:31.780343 ignition[1003]: INFO : Stage: mount May 27 03:32:31.782072 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:32:31.782072 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:32:31.785221 ignition[1003]: INFO : mount: mount passed May 27 03:32:31.785221 ignition[1003]: INFO : Ignition finished successfully May 27 03:32:31.788009 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:32:31.790975 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:32:32.030534 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:32:32.032223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:32:32.062252 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1015) May 27 03:32:32.062286 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:32:32.062297 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:32:32.063972 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:32:32.067093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
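The "cut: ... No such file or directory" messages are initrd-setup-root probing account databases that do not exist yet under /sysroot/etc; on a first boot there is nothing to merge, so the probe fails harmlessly before defaults are seeded from the immutable /usr image. A rough sketch of that idea (the /usr source path is an assumption, not taken from this log):

    # Sketch only, not the actual Flatcar script.
    for db in passwd group shadow gshadow; do
        # Probing existing entries produces the "No such file" lines on first boot.
        cut -d: -f1 "/sysroot/etc/$db" >/dev/null || true
        # Seed defaults from the read-only image if the database is absent.
        [ -e "/sysroot/etc/$db" ] || cp "/sysroot/usr/share/flatcar/etc/$db" "/sysroot/etc/$db"
    done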
May 27 03:32:32.098032 ignition[1032]: INFO : Ignition 2.21.0 May 27 03:32:32.098032 ignition[1032]: INFO : Stage: files May 27 03:32:32.099595 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:32:32.099595 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:32:32.101807 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping May 27 03:32:32.102908 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:32:32.102908 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:32:32.105964 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:32:32.105964 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:32:32.105964 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:32:32.105415 unknown[1032]: wrote ssh authorized keys file for user: core May 27 03:32:32.111102 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 03:32:32.111102 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 03:32:32.186798 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:32:32.337386 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 03:32:32.337386 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:32:32.341213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:32:32.353380 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:32:32.353380 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:32:32.353380 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:32:32.353380 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:32:32.353380 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:32:32.353380 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 27 03:32:32.819130 systemd-networkd[857]: eth0: Gained IPv6LL May 27 03:32:33.002748 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 03:32:33.398825 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:32:33.398825 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 03:32:33.402859 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:32:33.405333 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:32:33.405333 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 03:32:33.405333 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 03:32:33.410142 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:32:33.410142 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:32:33.410142 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 27 03:32:33.410142 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 27 03:32:33.424238 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:32:33.427913 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:32:33.429553 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 27 03:32:33.429553 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 27 03:32:33.429553 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:32:33.429553 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:32:33.429553 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:32:33.429553 ignition[1032]: INFO : files: files passed May 27 03:32:33.429553 ignition[1032]: INFO : Ignition finished successfully May 27 03:32:33.439918 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:32:33.442812 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:32:33.445090 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
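Everything the files stage wrote above (the helm tarball, the core user's SSH key, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions link, and the prepare-helm/coreos-metadata unit handling) maps onto a provisioning config. A Butane sketch that would transpile to roughly this Ignition run; the literal paths and URLs are reconstructed from the log, while key material, file contents, and unit bodies are not recoverable from it:

    # Butane sketch; convert with: butane config.yaml > config.ign
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-ed25519 AAAA... placeholder"
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
        - path: /etc/flatcar/update.conf          # contents elided in this sketch
        - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
        # plus the /home/core helper files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml)
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true                           # unit body omitted here
        - name: coreos-metadata.service
          enabled: false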
May 27 03:32:33.458820 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:32:33.458967 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 03:32:33.462072 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory May 27 03:32:33.463518 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:32:33.465105 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:32:33.465105 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:32:33.465050 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:32:33.465649 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:32:33.471613 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 03:32:33.531486 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:32:33.531615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:32:33.533851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:32:33.536006 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:32:33.538108 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:32:33.538861 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:32:33.575217 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:32:33.578891 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:32:33.597012 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:32:33.597353 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:32:33.599716 systemd[1]: Stopped target timers.target - Timer Units. May 27 03:32:33.602271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 03:32:33.602380 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:32:33.605820 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:32:33.608107 systemd[1]: Stopped target basic.target - Basic System. May 27 03:32:33.608688 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:32:33.609224 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:32:33.609591 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 03:32:33.609960 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 03:32:33.610310 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:32:33.610678 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:32:33.611229 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:32:33.611595 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:32:33.611965 systemd[1]: Stopped target swap.target - Swaps. May 27 03:32:33.628193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:32:33.628297 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
May 27 03:32:33.629312 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:32:33.629720 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:32:33.630221 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:32:33.636273 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:32:33.636881 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:32:33.636991 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 03:32:33.642729 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:32:33.642842 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:32:33.643484 systemd[1]: Stopped target paths.target - Path Units. May 27 03:32:33.643776 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:32:33.651004 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:32:33.651345 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:32:33.654305 systemd[1]: Stopped target sockets.target - Socket Units. May 27 03:32:33.654667 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:32:33.654750 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:32:33.655229 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:32:33.655308 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:32:33.659453 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:32:33.659569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:32:33.661410 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:32:33.661509 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:32:33.666522 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:32:33.667164 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:32:33.667268 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:32:33.668235 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:32:33.671671 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:32:33.671786 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:32:33.672244 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:32:33.672335 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:32:33.680643 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:32:33.680748 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 03:32:33.693979 ignition[1087]: INFO : Ignition 2.21.0 May 27 03:32:33.695010 ignition[1087]: INFO : Stage: umount May 27 03:32:33.695780 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:32:33.695780 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:32:33.698051 ignition[1087]: INFO : umount: umount passed May 27 03:32:33.698051 ignition[1087]: INFO : Ignition finished successfully May 27 03:32:33.699975 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:32:33.700101 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
May 27 03:32:33.702789 systemd[1]: Stopped target network.target - Network. May 27 03:32:33.704436 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:32:33.704499 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:32:33.705395 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 03:32:33.705438 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:32:33.705726 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:32:33.705772 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:32:33.706224 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:32:33.706264 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 03:32:33.706661 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 03:32:33.707028 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:32:33.716326 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 03:32:33.720148 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:32:33.720267 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:32:33.724475 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 03:32:33.724748 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:32:33.724794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:32:33.729729 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:32:33.730531 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:32:33.730661 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:32:33.734236 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:32:33.734741 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:32:33.737765 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:32:33.737813 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:32:33.739048 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:32:33.741345 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:32:33.741393 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:32:33.741663 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:32:33.741704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:32:33.747576 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:32:33.747629 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 03:32:33.748325 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:32:33.754587 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:32:33.771804 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:32:33.771993 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:32:33.772668 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:32:33.772711 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
May 27 03:32:33.775721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:32:33.775765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:32:33.776179 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:32:33.776225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 03:32:33.781903 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:32:33.781980 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:32:33.784602 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:32:33.784651 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:32:33.788561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:32:33.789135 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:32:33.789185 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:32:33.793852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 03:32:33.793898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:32:33.797325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:32:33.797369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:32:33.801065 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:32:33.802095 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:32:33.808698 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 03:32:33.808815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:32:33.894007 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:32:33.894132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:32:33.896075 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 03:32:33.896426 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:32:33.896475 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:32:33.899581 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:32:33.921280 systemd[1]: Switching root. May 27 03:32:33.975290 systemd-journald[219]: Journal stopped May 27 03:32:35.153756 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). May 27 03:32:35.153825 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:32:35.153839 kernel: SELinux: policy capability open_perms=1 May 27 03:32:35.153855 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:32:35.153866 kernel: SELinux: policy capability always_check_network=0 May 27 03:32:35.153881 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:32:35.153892 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:32:35.153904 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:32:35.153915 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:32:35.153926 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:32:35.153938 kernel: audit: type=1403 audit(1748316754.396:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:32:35.153975 systemd[1]: Successfully loaded SELinux policy in 46.440ms. 
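After switch-root, the initrd journal (everything above) is handed over to the real system's journald and can be reviewed later with standard journalctl queries:

    journalctl -b -o short-precise    # this boot, with microsecond timestamps like the ones here
    journalctl --list-boots           # enumerate stored boots in a persistent journal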
May 27 03:32:35.154000 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.146ms. May 27 03:32:35.154014 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:32:35.154026 systemd[1]: Detected virtualization kvm. May 27 03:32:35.154040 systemd[1]: Detected architecture x86-64. May 27 03:32:35.154056 systemd[1]: Detected first boot. May 27 03:32:35.154068 systemd[1]: Initializing machine ID from VM UUID. May 27 03:32:35.154080 zram_generator::config[1133]: No configuration found. May 27 03:32:35.154092 kernel: Guest personality initialized and is inactive May 27 03:32:35.154106 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 03:32:35.154117 kernel: Initialized host personality May 27 03:32:35.154128 kernel: NET: Registered PF_VSOCK protocol family May 27 03:32:35.154139 systemd[1]: Populated /etc with preset unit settings. May 27 03:32:35.154155 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:32:35.154167 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:32:35.154179 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:32:35.154191 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:32:35.154203 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:32:35.154217 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:32:35.154229 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:32:35.154240 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:32:35.154252 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 03:32:35.154264 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:32:35.154276 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 03:32:35.154288 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:32:35.154301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:32:35.154313 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:32:35.154327 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 03:32:35.154340 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:32:35.154352 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:32:35.154364 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:32:35.154376 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 03:32:35.154388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:32:35.154400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:32:35.154412 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
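"Initializing machine ID from VM UUID" is systemd's first-boot path on virtual machines: instead of rolling a random /etc/machine-id, it derives the ID from the hypervisor-exposed DMI product UUID. On a KVM guest like this one, that source sits at the usual sysfs path:

    cat /sys/class/dmi/id/product_uuid   # hypervisor-provided UUID (root required)
    cat /etc/machine-id                  # the stable ID derived from it on first boot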
May 27 03:32:35.154425 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:32:35.154437 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:32:35.154449 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:32:35.154461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:32:35.154473 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:32:35.154485 systemd[1]: Reached target slices.target - Slice Units. May 27 03:32:35.154503 systemd[1]: Reached target swap.target - Swaps. May 27 03:32:35.154516 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 03:32:35.154528 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:32:35.154545 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:32:35.154557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:32:35.154570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:32:35.154582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:32:35.154594 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:32:35.154606 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:32:35.154617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:32:35.154629 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:32:35.154641 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:35.154655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:32:35.154667 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:32:35.154679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:32:35.154691 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:32:35.154703 systemd[1]: Reached target machines.target - Containers. May 27 03:32:35.154715 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 03:32:35.154727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:32:35.154739 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:32:35.154752 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 03:32:35.154764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:32:35.154776 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:32:35.154788 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:32:35.154800 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 03:32:35.154812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:32:35.154825 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 27 03:32:35.154837 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:32:35.154850 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:32:35.154862 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:32:35.154874 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:32:35.154887 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:32:35.154899 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:32:35.154910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:32:35.154924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:32:35.154937 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:32:35.154974 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:32:35.154988 kernel: loop: module loaded May 27 03:32:35.154998 kernel: fuse: init (API version 7.41) May 27 03:32:35.155010 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:32:35.155022 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:32:35.155034 systemd[1]: Stopped verity-setup.service. May 27 03:32:35.155048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:35.155081 systemd-journald[1197]: Collecting audit messages is disabled. May 27 03:32:35.155104 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:32:35.155117 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:32:35.155131 systemd-journald[1197]: Journal started May 27 03:32:35.155154 systemd-journald[1197]: Runtime Journal (/run/log/journal/c10a814c77114ec2a4051388d7d4c26e) is 6M, max 48.2M, 42.2M free. May 27 03:32:35.158547 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:32:34.915827 systemd[1]: Queued start job for default target multi-user.target. May 27 03:32:34.939792 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 03:32:34.940242 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 03:32:35.162804 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:32:35.161756 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:32:35.162975 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:32:35.164279 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:32:35.165680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:32:35.167310 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:32:35.167534 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:32:35.168985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:32:35.169207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:32:35.170632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
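The runtime journal sized here (6M in use, 48.2M cap) lives in /run and follows journald's default percentage-of-filesystem limits; both it and the persistent system journal flushed shortly after can be capped explicitly through a drop-in (values below are illustrative, mirroring the caps reported in this boot):

    # /etc/systemd/journald.conf.d/limits.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M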
May 27 03:32:35.170895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:32:35.172359 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:32:35.172643 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:32:35.174058 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:32:35.174270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:32:35.175650 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:32:35.177202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:32:35.179192 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:32:35.180977 kernel: ACPI: bus type drm_connector registered May 27 03:32:35.181395 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 03:32:35.183333 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:32:35.183544 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:32:35.197619 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:32:35.200342 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:32:35.239539 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:32:35.240958 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:32:35.240994 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:32:35.243398 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:32:35.246077 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 03:32:35.247339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:32:35.248943 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:32:35.265659 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:32:35.267152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:32:35.268254 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:32:35.270193 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:32:35.272434 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:32:35.274212 systemd-journald[1197]: Time spent on flushing to /var/log/journal/c10a814c77114ec2a4051388d7d4c26e is 14.829ms for 1034 entries. May 27 03:32:35.274212 systemd-journald[1197]: System Journal (/var/log/journal/c10a814c77114ec2a4051388d7d4c26e) is 8M, max 195.6M, 187.6M free. May 27 03:32:35.350192 systemd-journald[1197]: Received client request to flush runtime journal. May 27 03:32:35.350238 kernel: loop0: detected capacity change from 0 to 113872 May 27 03:32:35.274658 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 03:32:35.280015 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
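The various modprobe@foo.service entries above are instances of a single template unit that loads whichever module the instance name carries; conceptually the template reduces to the following (paraphrased, not the verbatim shipped unit):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/modprobe -abq %i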
May 27 03:32:35.281966 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 03:32:35.283509 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 03:32:35.336680 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 03:32:35.338167 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 03:32:35.341164 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 03:32:35.344666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:32:35.361564 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:32:35.359872 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:32:35.362435 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:32:35.366675 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 03:32:35.379596 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 03:32:35.382019 kernel: loop1: detected capacity change from 0 to 224512 May 27 03:32:35.405318 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:32:35.408615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:32:35.409989 kernel: loop2: detected capacity change from 0 to 146240 May 27 03:32:35.435098 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 27 03:32:35.435690 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 27 03:32:35.442059 kernel: loop3: detected capacity change from 0 to 113872 May 27 03:32:35.445081 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:32:35.451965 kernel: loop4: detected capacity change from 0 to 224512 May 27 03:32:35.463009 kernel: loop5: detected capacity change from 0 to 146240 May 27 03:32:35.473102 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 03:32:35.474063 (sd-merge)[1273]: Merged extensions into '/usr'. May 27 03:32:35.478495 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:32:35.478509 systemd[1]: Reloading... May 27 03:32:35.543975 zram_generator::config[1299]: No configuration found. May 27 03:32:35.628404 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 03:32:35.644106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:32:35.723537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 03:32:35.723762 systemd[1]: Reloading finished in 244 ms. May 27 03:32:35.750323 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 03:32:35.751900 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 03:32:35.765208 systemd[1]: Starting ensure-sysext.service... May 27 03:32:35.766986 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:32:35.790094 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)... May 27 03:32:35.790115 systemd[1]: Reloading... 
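The (sd-merge) lines are systemd-sysext at work: the containerd-flatcar, docker-flatcar, and kubernetes extension images are overlaid onto /usr, and the subsequent "Reloading..." picks up the unit files they contribute. The overlay state can be inspected at runtime with the same tool:

    systemd-sysext list      # extension images found in /var/lib/extensions, /etc/extensions, ...
    systemd-sysext status    # which hierarchies currently carry an overlay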
May 27 03:32:35.799421 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:32:35.799459 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 03:32:35.799749 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:32:35.800012 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 03:32:35.800875 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:32:35.801149 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. May 27 03:32:35.801223 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. May 27 03:32:35.822016 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:32:35.822158 systemd-tmpfiles[1338]: Skipping /boot May 27 03:32:35.835490 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:32:35.835503 systemd-tmpfiles[1338]: Skipping /boot May 27 03:32:35.847975 zram_generator::config[1366]: No configuration found. May 27 03:32:35.939939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:32:36.019092 systemd[1]: Reloading finished in 228 ms. May 27 03:32:36.040400 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:32:36.059276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:32:36.068201 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:32:36.070514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:32:36.091415 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:32:36.095753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:32:36.098743 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:32:36.104233 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:32:36.112095 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:36.112324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:32:36.115197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:32:36.118134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:32:36.128263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:32:36.130112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:32:36.130267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:32:36.133736 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
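The "Duplicate line for path ..." warnings mean two tmpfiles.d fragments declare the same path; the first one parsed wins and the rest are ignored, so they are benign. Each colliding line follows the standard tmpfiles.d(5) column layout (the 'd' type and modes below are illustrative; the log only reveals the paths):

    # Type Path                Mode User Group Age
    d      /var/lib/nfs/sm     0700 -    -     -
    d      /var/lib/nfs/sm.bak 0700 -    -     -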
May 27 03:32:36.135032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:36.137692 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:32:36.139725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:32:36.139941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:32:36.142415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:32:36.144780 systemd-udevd[1409]: Using default interface naming scheme 'v255'. May 27 03:32:36.150134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:32:36.150862 augenrules[1433]: No rules May 27 03:32:36.152663 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:32:36.153136 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:32:36.156560 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:32:36.156858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:32:36.164963 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:32:36.165588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:32:36.167486 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:32:36.169568 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:32:36.174920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:36.176261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:32:36.188196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:32:36.190922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:32:36.196174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:32:36.197493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:32:36.197606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:32:36.197707 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:32:36.197779 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:36.198624 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:32:36.203322 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:32:36.211490 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:32:36.214994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 27 03:32:36.216311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:32:36.231924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:32:36.236195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:32:36.238422 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 03:32:36.241550 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:32:36.241754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:32:36.250663 systemd[1]: Finished ensure-sysext.service. May 27 03:32:36.258157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:36.261098 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:32:36.262198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:32:36.263807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:32:36.267239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:32:36.268768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:32:36.268808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:32:36.279268 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:32:36.282042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:32:36.285085 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:32:36.286266 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:32:36.286310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:32:36.286924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:32:36.287192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:32:36.288777 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:32:36.289024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:32:36.294785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:32:36.311061 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 03:32:36.324284 augenrules[1486]: /sbin/augenrules: No change May 27 03:32:36.337857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:32:36.341283 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:32:36.345098 augenrules[1517]: No rules May 27 03:32:36.346868 systemd[1]: audit-rules.service: Deactivated successfully. 
May 27 03:32:36.354324 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:32:36.373979 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:32:36.376698 systemd-resolved[1407]: Positive Trust Anchors: May 27 03:32:36.376713 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:32:36.376743 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:32:36.383609 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:32:36.385486 systemd-resolved[1407]: Defaulting to hostname 'linux'. May 27 03:32:36.386967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 03:32:36.388026 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:32:36.389260 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:32:36.392972 kernel: ACPI: button: Power Button [PWRF] May 27 03:32:36.409062 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 27 03:32:36.409545 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:32:36.409730 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:32:36.448539 systemd-networkd[1490]: lo: Link UP May 27 03:32:36.448553 systemd-networkd[1490]: lo: Gained carrier May 27 03:32:36.450262 systemd-networkd[1490]: Enumeration completed May 27 03:32:36.450365 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:32:36.451685 systemd[1]: Reached target network.target - Network. May 27 03:32:36.452999 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:32:36.453004 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:32:36.453517 systemd-networkd[1490]: eth0: Link UP May 27 03:32:36.453694 systemd-networkd[1490]: eth0: Gained carrier May 27 03:32:36.453706 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:32:36.458125 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:32:36.462127 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:32:36.472085 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:32:36.477662 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:32:36.478290 systemd-timesyncd[1493]: Network configuration changed, trying to establish connection. May 27 03:32:36.479195 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:32:36.480394 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
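The block of trust anchors is systemd-resolved announcing its DNSSEC configuration: one positive anchor (the root zone's DS record) and the standard list of negative anchors, i.e. private and reverse-lookup zones that are never DNSSEC-validated. The resolver's live view is available with:

    resolvectl status    # per-link DNS servers, DNSSEC setting, current scopes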
May 27 03:32:36.481666 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:32:36.484050 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:32:36.485199 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:32:36.486509 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:32:36.486532 systemd[1]: Reached target paths.target - Path Units. May 27 03:32:36.487508 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:32:36.489062 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:32:36.490257 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:32:36.491696 systemd[1]: Reached target timers.target - Timer Units. May 27 03:32:36.494825 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:32:36.497598 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:32:37.880183 systemd-timesyncd[1493]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 03:32:37.880238 systemd-timesyncd[1493]: Initial clock synchronization to Tue 2025-05-27 03:32:37.880099 UTC. May 27 03:32:37.882345 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:32:37.883887 systemd-resolved[1407]: Clock change detected. Flushing caches. May 27 03:32:37.886867 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:32:37.888169 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:32:37.893914 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 03:32:37.895779 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:32:37.898162 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:32:37.902842 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:32:37.917246 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:32:37.922378 systemd[1]: Reached target basic.target - Basic System. May 27 03:32:37.923694 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:32:37.923880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:32:37.928158 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:32:37.931620 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:32:37.937347 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:32:37.942299 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:32:37.944878 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:32:37.946053 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:32:37.947312 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
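Between the docker.socket record at 03:32:36.497598 and the systemd-timesyncd record at 03:32:37.880183 the journal timestamps jump forward, and systemd-resolved confirms it with "Clock change detected. Flushing caches." A small sketch that bounds the step from those two adjacent records (it ignores the real time that elapsed between them, so it is only an upper bound, assuming a single forward step):

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    # The two adjacent records from the log, before and after synchronization.
    before = datetime.strptime("03:32:36.497598", fmt)
    after = datetime.strptime("03:32:37.880183", fmt)
    print(f"apparent forward clock step <= {(after - before).total_seconds():.3f}s")
    # ~1.383s; the clock was stepped when timesyncd reached 10.0.0.1:123.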
May 27 03:32:37.949597 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:32:37.951577 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:32:37.953681 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 03:32:37.957707 jq[1555]: false May 27 03:32:37.960243 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:32:37.967453 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:32:37.969304 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:32:37.969841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:32:37.973026 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:32:37.978653 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:32:37.985524 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:32:37.987631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:32:37.988060 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:32:37.989179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:32:37.989409 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:32:38.005141 jq[1566]: true May 27 03:32:38.014003 update_engine[1564]: I20250527 03:32:38.012662 1564 main.cc:92] Flatcar Update Engine starting May 27 03:32:38.014677 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing passwd entry cache May 27 03:32:38.014861 extend-filesystems[1556]: Found loop3 May 27 03:32:38.014861 extend-filesystems[1556]: Found loop4 May 27 03:32:38.014861 extend-filesystems[1556]: Found loop5 May 27 03:32:38.014861 extend-filesystems[1556]: Found sr0 May 27 03:32:38.014861 extend-filesystems[1556]: Found vda May 27 03:32:38.014861 extend-filesystems[1556]: Found vda1 May 27 03:32:38.014861 extend-filesystems[1556]: Found vda2 May 27 03:32:38.014861 extend-filesystems[1556]: Found vda3 May 27 03:32:38.014861 extend-filesystems[1556]: Found usr May 27 03:32:38.014861 extend-filesystems[1556]: Found vda4 May 27 03:32:38.014861 extend-filesystems[1556]: Found vda6 May 27 03:32:38.014861 extend-filesystems[1556]: Found vda7 May 27 03:32:38.014861 extend-filesystems[1556]: Found vda9 May 27 03:32:38.014861 extend-filesystems[1556]: Checking size of /dev/vda9 May 27 03:32:38.041792 kernel: kvm_amd: TSC scaling supported May 27 03:32:38.041842 kernel: kvm_amd: Nested Virtualization enabled May 27 03:32:38.041887 kernel: kvm_amd: Nested Paging enabled May 27 03:32:38.041911 kernel: kvm_amd: LBR virtualization supported May 27 03:32:38.041933 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 27 03:32:38.041958 kernel: kvm_amd: Virtual GIF supported May 27 03:32:38.005993 oslogin_cache_refresh[1557]: Refreshing passwd entry cache May 27 03:32:38.014446 systemd[1]: motdgen.service: Deactivated successfully. 
May 27 03:32:38.042490 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting users, quitting May 27 03:32:38.042490 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:32:38.042604 tar[1571]: linux-amd64/LICENSE May 27 03:32:38.042604 tar[1571]: linux-amd64/helm May 27 03:32:38.018076 oslogin_cache_refresh[1557]: Failure getting users, quitting May 27 03:32:38.016956 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:32:38.018097 oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:32:38.017079 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:32:38.048110 jq[1585]: true May 27 03:32:38.058882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:32:38.092547 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing group entry cache May 27 03:32:38.092492 oslogin_cache_refresh[1557]: Refreshing group entry cache May 27 03:32:38.099837 dbus-daemon[1553]: [system] SELinux support is enabled May 27 03:32:38.099999 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:32:38.103402 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:32:38.103428 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:32:38.104875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:32:38.104893 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:32:38.109485 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting groups, quitting May 27 03:32:38.109485 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:32:38.109176 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:32:38.106732 oslogin_cache_refresh[1557]: Failure getting groups, quitting May 27 03:32:38.109635 update_engine[1564]: I20250527 03:32:38.107542 1564 update_check_scheduler.cc:74] Next update check in 2m11s May 27 03:32:38.109463 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:32:38.106747 oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:32:38.111002 systemd[1]: Started update-engine.service - Update Engine. May 27 03:32:38.115972 kernel: EDAC MC: Ver: 3.0.0 May 27 03:32:38.115264 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:32:38.115290 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:32:38.115781 systemd-logind[1563]: New seat seat0. May 27 03:32:38.118330 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:32:38.124392 extend-filesystems[1556]: Resized partition /dev/vda9 May 27 03:32:38.125768 systemd[1]: Started systemd-logind.service - User Login Management. 
May 27 03:32:38.180515 extend-filesystems[1615]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:32:38.260267 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:32:38.273642 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 03:32:38.431029 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:32:38.454263 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:32:38.471502 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:32:38.498284 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:32:38.498559 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:32:38.515335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:32:38.519816 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:32:38.580246 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:32:38.583229 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:32:38.585415 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:32:38.595920 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:32:38.655356 tar[1571]: linux-amd64/README.md May 27 03:32:38.671643 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 03:32:39.218448 containerd[1582]: time="2025-05-27T03:32:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:32:39.219327 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:32:39.219556 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 03:32:39.219556 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 03:32:39.219556 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 03:32:39.224022 extend-filesystems[1556]: Resized filesystem in /dev/vda9 May 27 03:32:39.225157 containerd[1582]: time="2025-05-27T03:32:39.220430897Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:32:39.225369 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:32:39.225669 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
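The resize2fs and EXT4 records above grow the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. A quick conversion of those figures into GiB:

    # Convert the EXT4 resize reported above from 4 KiB blocks to GiB.
    BLOCK = 4096
    GIB = 1 << 30
    old_blocks, new_blocks = 553_472, 1_864_699
    print(f"before: {old_blocks * BLOCK / GIB:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / GIB:.2f} GiB")  # ~7.11 GiB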
May 27 03:32:39.229181 containerd[1582]: time="2025-05-27T03:32:39.229128852Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.972µs" May 27 03:32:39.229181 containerd[1582]: time="2025-05-27T03:32:39.229171863Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:32:39.229234 containerd[1582]: time="2025-05-27T03:32:39.229191399Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:32:39.229409 containerd[1582]: time="2025-05-27T03:32:39.229382718Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:32:39.229409 containerd[1582]: time="2025-05-27T03:32:39.229403828Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:32:39.229449 containerd[1582]: time="2025-05-27T03:32:39.229428233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:32:39.229514 containerd[1582]: time="2025-05-27T03:32:39.229489989Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:32:39.229514 containerd[1582]: time="2025-05-27T03:32:39.229504917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:32:39.229822 containerd[1582]: time="2025-05-27T03:32:39.229800221Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:32:39.229822 containerd[1582]: time="2025-05-27T03:32:39.229819137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:32:39.229868 containerd[1582]: time="2025-05-27T03:32:39.229830278Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:32:39.229868 containerd[1582]: time="2025-05-27T03:32:39.229838503Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:32:39.229943 containerd[1582]: time="2025-05-27T03:32:39.229924695Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:32:39.230176 containerd[1582]: time="2025-05-27T03:32:39.230149186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:32:39.230199 containerd[1582]: time="2025-05-27T03:32:39.230182839Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:32:39.230199 containerd[1582]: time="2025-05-27T03:32:39.230193529Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:32:39.230236 containerd[1582]: time="2025-05-27T03:32:39.230225699Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:32:39.230436 containerd[1582]: 
time="2025-05-27T03:32:39.230418461Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:32:39.230498 containerd[1582]: time="2025-05-27T03:32:39.230481820Z" level=info msg="metadata content store policy set" policy=shared May 27 03:32:39.480574 bash[1606]: Updated "/home/core/.ssh/authorized_keys" May 27 03:32:39.482685 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:32:39.496379 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 03:32:39.672651 containerd[1582]: time="2025-05-27T03:32:39.672584097Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:32:39.672700 containerd[1582]: time="2025-05-27T03:32:39.672689094Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:32:39.672722 containerd[1582]: time="2025-05-27T03:32:39.672708460Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:32:39.672742 containerd[1582]: time="2025-05-27T03:32:39.672720794Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:32:39.673060 containerd[1582]: time="2025-05-27T03:32:39.672907203Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:32:39.673060 containerd[1582]: time="2025-05-27T03:32:39.672941297Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:32:39.673215 containerd[1582]: time="2025-05-27T03:32:39.673067714Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:32:39.673215 containerd[1582]: time="2025-05-27T03:32:39.673097420Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:32:39.673215 containerd[1582]: time="2025-05-27T03:32:39.673117658Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:32:39.673215 containerd[1582]: time="2025-05-27T03:32:39.673136634Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:32:39.673215 containerd[1582]: time="2025-05-27T03:32:39.673157613Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:32:39.673215 containerd[1582]: time="2025-05-27T03:32:39.673175637Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:32:39.673422 containerd[1582]: time="2025-05-27T03:32:39.673331399Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:32:39.673422 containerd[1582]: time="2025-05-27T03:32:39.673366405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:32:39.673422 containerd[1582]: time="2025-05-27T03:32:39.673390600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:32:39.673422 containerd[1582]: time="2025-05-27T03:32:39.673410678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673423622Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673438780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673455552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673471472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673499875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673520654Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:32:39.673636 containerd[1582]: time="2025-05-27T03:32:39.673539940Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:32:39.673825 containerd[1582]: time="2025-05-27T03:32:39.673725057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:32:39.673825 containerd[1582]: time="2025-05-27T03:32:39.673762978Z" level=info msg="Start snapshots syncer" May 27 03:32:39.673825 containerd[1582]: time="2025-05-27T03:32:39.673801891Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:32:39.674334 containerd[1582]: time="2025-05-27T03:32:39.674164792Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:32:39.674541 containerd[1582]: time="2025-05-27T03:32:39.674346753Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:32:39.674541 containerd[1582]: time="2025-05-27T03:32:39.674432654Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:32:39.674585 containerd[1582]: time="2025-05-27T03:32:39.674565574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:32:39.674606 containerd[1582]: time="2025-05-27T03:32:39.674588377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:32:39.674606 containerd[1582]: time="2025-05-27T03:32:39.674600269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:32:39.674656 containerd[1582]: time="2025-05-27T03:32:39.674627791Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:32:39.674656 containerd[1582]: time="2025-05-27T03:32:39.674641035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:32:39.674656 containerd[1582]: time="2025-05-27T03:32:39.674651806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:32:39.674723 containerd[1582]: time="2025-05-27T03:32:39.674662856Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:32:39.674723 containerd[1582]: time="2025-05-27T03:32:39.674687362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:32:39.674723 containerd[1582]: time="2025-05-27T03:32:39.674699525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:32:39.674723 containerd[1582]: time="2025-05-27T03:32:39.674716186Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:32:39.674794 containerd[1582]: time="2025-05-27T03:32:39.674750801Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:32:39.674794 containerd[1582]: time="2025-05-27T03:32:39.674765519Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:32:39.674794 containerd[1582]: time="2025-05-27T03:32:39.674775528Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:32:39.674794 containerd[1582]: time="2025-05-27T03:32:39.674784995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:32:39.674794 containerd[1582]: time="2025-05-27T03:32:39.674792690Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674802608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674813449Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674830711Z" level=info msg="runtime interface 
created" May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674836051Z" level=info msg="created NRI interface" May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674845449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674858543Z" level=info msg="Connect containerd service" May 27 03:32:39.674896 containerd[1582]: time="2025-05-27T03:32:39.674880615Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:32:39.675850 containerd[1582]: time="2025-05-27T03:32:39.675816029Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:32:39.762590 containerd[1582]: time="2025-05-27T03:32:39.762420164Z" level=info msg="Start subscribing containerd event" May 27 03:32:39.762590 containerd[1582]: time="2025-05-27T03:32:39.762489885Z" level=info msg="Start recovering state" May 27 03:32:39.762748 containerd[1582]: time="2025-05-27T03:32:39.762600683Z" level=info msg="Start event monitor" May 27 03:32:39.762748 containerd[1582]: time="2025-05-27T03:32:39.762604790Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:32:39.762748 containerd[1582]: time="2025-05-27T03:32:39.762649244Z" level=info msg="Start cni network conf syncer for default" May 27 03:32:39.762748 containerd[1582]: time="2025-05-27T03:32:39.762675222Z" level=info msg="Start streaming server" May 27 03:32:39.762748 containerd[1582]: time="2025-05-27T03:32:39.762684850Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:32:39.762748 containerd[1582]: time="2025-05-27T03:32:39.762701502Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:32:39.762865 containerd[1582]: time="2025-05-27T03:32:39.762692024Z" level=info msg="runtime interface starting up..." May 27 03:32:39.762865 containerd[1582]: time="2025-05-27T03:32:39.762772104Z" level=info msg="starting plugins..." May 27 03:32:39.762865 containerd[1582]: time="2025-05-27T03:32:39.762787643Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:32:39.763044 containerd[1582]: time="2025-05-27T03:32:39.763015691Z" level=info msg="containerd successfully booted in 1.009835s" May 27 03:32:39.763136 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:32:39.895902 systemd-networkd[1490]: eth0: Gained IPv6LL May 27 03:32:39.898834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:32:39.900622 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:32:39.903327 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:32:39.905785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:32:39.924048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:32:39.947820 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:32:39.949604 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 03:32:39.949902 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 27 03:32:39.952111 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:32:40.612802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:32:40.614527 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:32:40.615892 systemd[1]: Startup finished in 2.799s (kernel) + 5.771s (initrd) + 4.882s (userspace) = 13.453s. May 27 03:32:40.618404 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:32:41.015455 kubelet[1691]: E0527 03:32:41.015333 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:32:41.018978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:32:41.019188 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:32:41.019567 systemd[1]: kubelet.service: Consumed 957ms CPU time, 264.6M memory peak. May 27 03:32:43.412876 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:32:43.414163 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:52958.service - OpenSSH per-connection server daemon (10.0.0.1:52958). May 27 03:32:43.474219 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 52958 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:43.475771 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:43.481652 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:32:43.482683 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:32:43.489484 systemd-logind[1563]: New session 1 of user core. May 27 03:32:43.511657 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:32:43.514816 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:32:43.541059 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:32:43.543200 systemd-logind[1563]: New session c1 of user core. May 27 03:32:43.689379 systemd[1709]: Queued start job for default target default.target. May 27 03:32:43.711857 systemd[1709]: Created slice app.slice - User Application Slice. May 27 03:32:43.711881 systemd[1709]: Reached target paths.target - Paths. May 27 03:32:43.711921 systemd[1709]: Reached target timers.target - Timers. May 27 03:32:43.713407 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:32:43.723410 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:32:43.723474 systemd[1709]: Reached target sockets.target - Sockets. May 27 03:32:43.723513 systemd[1709]: Reached target basic.target - Basic System. May 27 03:32:43.723551 systemd[1709]: Reached target default.target - Main User Target. May 27 03:32:43.723583 systemd[1709]: Startup finished in 172ms. May 27 03:32:43.724020 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:32:43.725543 systemd[1]: Started session-1.scope - Session 1 of User core. 
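The "Startup finished" record above reports 2.799s (kernel) + 5.771s (initrd) + 4.882s (userspace) = 13.453s, while the rounded parts sum to 13.452s. The 1 ms gap is rounding: systemd totals the unrounded microsecond counters. A quick check:

    parts = {"kernel": 2.799, "initrd": 5.771, "userspace": 4.882}
    print(f"sum of rounded parts: {sum(parts.values()):.3f}s vs reported 13.453s")
    # 13.452s vs 13.453s: a rounding artifact, not an accounting error.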
May 27 03:32:43.795569 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:57650.service - OpenSSH per-connection server daemon (10.0.0.1:57650). May 27 03:32:43.850519 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 57650 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:43.852204 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:43.856376 systemd-logind[1563]: New session 2 of user core. May 27 03:32:43.870763 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:32:43.922966 sshd[1722]: Connection closed by 10.0.0.1 port 57650 May 27 03:32:43.923523 sshd-session[1720]: pam_unix(sshd:session): session closed for user core May 27 03:32:43.935147 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:57650.service: Deactivated successfully. May 27 03:32:43.936932 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:32:43.937626 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. May 27 03:32:43.940129 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:57664.service - OpenSSH per-connection server daemon (10.0.0.1:57664). May 27 03:32:43.940928 systemd-logind[1563]: Removed session 2. May 27 03:32:43.987468 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 57664 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:43.988943 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:43.993365 systemd-logind[1563]: New session 3 of user core. May 27 03:32:44.002745 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:32:44.051048 sshd[1730]: Connection closed by 10.0.0.1 port 57664 May 27 03:32:44.051332 sshd-session[1728]: pam_unix(sshd:session): session closed for user core May 27 03:32:44.064041 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:57664.service: Deactivated successfully. May 27 03:32:44.065489 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:32:44.066287 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. May 27 03:32:44.069151 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:57674.service - OpenSSH per-connection server daemon (10.0.0.1:57674). May 27 03:32:44.069712 systemd-logind[1563]: Removed session 3. May 27 03:32:44.112572 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 57674 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:44.113923 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:44.118058 systemd-logind[1563]: New session 4 of user core. May 27 03:32:44.127745 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:32:44.179827 sshd[1738]: Connection closed by 10.0.0.1 port 57674 May 27 03:32:44.180239 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 27 03:32:44.196219 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:57674.service: Deactivated successfully. May 27 03:32:44.197851 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:32:44.198605 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. May 27 03:32:44.201386 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:57684.service - OpenSSH per-connection server daemon (10.0.0.1:57684). May 27 03:32:44.201926 systemd-logind[1563]: Removed session 4. 
May 27 03:32:44.245839 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 57684 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:44.247117 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:44.251103 systemd-logind[1563]: New session 5 of user core. May 27 03:32:44.260734 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 03:32:44.317000 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:32:44.317293 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:32:44.339787 sudo[1747]: pam_unix(sudo:session): session closed for user root May 27 03:32:44.341319 sshd[1746]: Connection closed by 10.0.0.1 port 57684 May 27 03:32:44.341593 sshd-session[1744]: pam_unix(sshd:session): session closed for user core May 27 03:32:44.350453 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:57684.service: Deactivated successfully. May 27 03:32:44.352413 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:32:44.353176 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. May 27 03:32:44.356224 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:57686.service - OpenSSH per-connection server daemon (10.0.0.1:57686). May 27 03:32:44.356791 systemd-logind[1563]: Removed session 5. May 27 03:32:44.406051 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 57686 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:44.407729 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:44.411985 systemd-logind[1563]: New session 6 of user core. May 27 03:32:44.427726 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 03:32:44.480740 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:32:44.481065 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:32:44.881868 sudo[1757]: pam_unix(sudo:session): session closed for user root May 27 03:32:44.888259 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:32:44.888660 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:32:44.898759 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:32:44.940047 augenrules[1779]: No rules May 27 03:32:44.941924 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:32:44.942236 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:32:44.943577 sudo[1756]: pam_unix(sudo:session): session closed for user root May 27 03:32:44.945359 sshd[1755]: Connection closed by 10.0.0.1 port 57686 May 27 03:32:44.945706 sshd-session[1753]: pam_unix(sshd:session): session closed for user core May 27 03:32:44.956403 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:57686.service: Deactivated successfully. May 27 03:32:44.958282 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:32:44.959146 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. May 27 03:32:44.961845 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:57698.service - OpenSSH per-connection server daemon (10.0.0.1:57698). May 27 03:32:44.962401 systemd-logind[1563]: Removed session 6. 
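The sshd and sudo records above follow a fixed pam_unix pattern: every "session opened for user X" is later matched by a "session closed for user X". A small sketch that pairs opens and closes for lines in this format, fed with samples copied from the log (the regex and counting scheme are illustrative):

    import re

    lines = [
        "sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
        "sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)",
        "sudo[1747]: pam_unix(sudo:session): session closed for user root",
        "sshd-session[1744]: pam_unix(sshd:session): session closed for user core",
    ]
    pat = re.compile(r"pam_unix\((?P<svc>[^)]+)\): session (?P<ev>opened|closed) "
                     r"for user (?P<user>\w+)")
    balance = {}
    for line in lines:
        m = pat.search(line)
        if m:
            key = (m["svc"], m["user"])
            balance[key] = balance.get(key, 0) + (1 if m["ev"] == "opened" else -1)
    print(balance)  # any non-zero value would flag an unmatched open or close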
May 27 03:32:45.011667 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 57698 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:32:45.012842 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:45.017039 systemd-logind[1563]: New session 7 of user core. May 27 03:32:45.026765 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:32:45.079076 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:32:45.079406 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:32:45.374382 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:32:45.390063 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:32:45.610545 dockerd[1812]: time="2025-05-27T03:32:45.610459865Z" level=info msg="Starting up" May 27 03:32:45.612204 dockerd[1812]: time="2025-05-27T03:32:45.612171435Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:32:46.403039 dockerd[1812]: time="2025-05-27T03:32:46.402980828Z" level=info msg="Loading containers: start." May 27 03:32:46.412647 kernel: Initializing XFRM netlink socket May 27 03:32:46.651748 systemd-networkd[1490]: docker0: Link UP May 27 03:32:46.657047 dockerd[1812]: time="2025-05-27T03:32:46.656967173Z" level=info msg="Loading containers: done." May 27 03:32:46.672826 dockerd[1812]: time="2025-05-27T03:32:46.672778721Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:32:46.672996 dockerd[1812]: time="2025-05-27T03:32:46.672847691Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:32:46.672996 dockerd[1812]: time="2025-05-27T03:32:46.672956254Z" level=info msg="Initializing buildkit" May 27 03:32:46.704683 dockerd[1812]: time="2025-05-27T03:32:46.704653419Z" level=info msg="Completed buildkit initialization" May 27 03:32:46.708693 dockerd[1812]: time="2025-05-27T03:32:46.708648192Z" level=info msg="Daemon has completed initialization" May 27 03:32:46.708759 dockerd[1812]: time="2025-05-27T03:32:46.708721159Z" level=info msg="API listen on /run/docker.sock" May 27 03:32:46.708889 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:32:47.352184 containerd[1582]: time="2025-05-27T03:32:47.352132054Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 03:32:47.910425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29079800.mount: Deactivated successfully. 
May 27 03:32:48.827931 containerd[1582]: time="2025-05-27T03:32:48.827873153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:48.828594 containerd[1582]: time="2025-05-27T03:32:48.828543610Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 27 03:32:48.829705 containerd[1582]: time="2025-05-27T03:32:48.829673079Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:48.833353 containerd[1582]: time="2025-05-27T03:32:48.833297076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:48.834225 containerd[1582]: time="2025-05-27T03:32:48.834176776Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.482003746s" May 27 03:32:48.834271 containerd[1582]: time="2025-05-27T03:32:48.834224436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 27 03:32:48.834827 containerd[1582]: time="2025-05-27T03:32:48.834797781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 03:32:49.993342 containerd[1582]: time="2025-05-27T03:32:49.993281836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:49.994052 containerd[1582]: time="2025-05-27T03:32:49.993998190Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 27 03:32:49.995237 containerd[1582]: time="2025-05-27T03:32:49.995191849Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:49.997563 containerd[1582]: time="2025-05-27T03:32:49.997518773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:49.998506 containerd[1582]: time="2025-05-27T03:32:49.998475207Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.163646328s" May 27 03:32:49.998553 containerd[1582]: time="2025-05-27T03:32:49.998505614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 27 03:32:49.999062 
containerd[1582]: time="2025-05-27T03:32:49.999030719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 03:32:51.269558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:32:51.271082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:32:52.083456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:32:52.087323 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:32:52.383269 kubelet[2090]: E0527 03:32:52.383088 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:32:52.389424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:32:52.389647 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:32:52.390001 systemd[1]: kubelet.service: Consumed 209ms CPU time, 111M memory peak. May 27 03:32:52.724955 containerd[1582]: time="2025-05-27T03:32:52.724817206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:52.789816 containerd[1582]: time="2025-05-27T03:32:52.789727068Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 27 03:32:52.835732 containerd[1582]: time="2025-05-27T03:32:52.835682948Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:52.839204 containerd[1582]: time="2025-05-27T03:32:52.839163086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:52.841622 containerd[1582]: time="2025-05-27T03:32:52.840197226Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.841133184s" May 27 03:32:52.841622 containerd[1582]: time="2025-05-27T03:32:52.840225088Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 27 03:32:52.842087 containerd[1582]: time="2025-05-27T03:32:52.842056914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 03:32:53.966179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093543326.mount: Deactivated successfully. 
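The containerd pull records above pair "bytes read" counts with wall-clock pull durations, which is enough for a rough effective-throughput estimate per image. A sketch using three of the pulls logged above (figures copied from the log):

    MIB = 1 << 20
    pulls = {  # image: (bytes read, pull duration in seconds)
        "kube-apiserver:v1.32.5": (28_797_811, 1.482003746),
        "kube-controller-manager:v1.32.5": (24_782_523, 1.163646328),
        "kube-scheduler:v1.32.5": (19_176_063, 2.841133184),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / MIB:5.1f} MiB/s")
    # ~18.5 and ~20.3 MiB/s for the first two; the scheduler pull is slower
    # (~6.4 MiB/s), plausibly because the kubelet restart ran concurrently.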
May 27 03:32:54.925842 containerd[1582]: time="2025-05-27T03:32:54.925795291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:54.927355 containerd[1582]: time="2025-05-27T03:32:54.927303530Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 27 03:32:54.928495 containerd[1582]: time="2025-05-27T03:32:54.928454269Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:54.930429 containerd[1582]: time="2025-05-27T03:32:54.930381704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:54.930893 containerd[1582]: time="2025-05-27T03:32:54.930845274Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.08854842s" May 27 03:32:54.930893 containerd[1582]: time="2025-05-27T03:32:54.930888264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 03:32:54.931372 containerd[1582]: time="2025-05-27T03:32:54.931334391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 03:32:55.449988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902913547.mount: Deactivated successfully. 
May 27 03:32:56.151668 containerd[1582]: time="2025-05-27T03:32:56.151589220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:56.152376 containerd[1582]: time="2025-05-27T03:32:56.152317376Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 03:32:56.153503 containerd[1582]: time="2025-05-27T03:32:56.153445572Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:56.155813 containerd[1582]: time="2025-05-27T03:32:56.155776595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:56.156684 containerd[1582]: time="2025-05-27T03:32:56.156647048Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.225281658s" May 27 03:32:56.156684 containerd[1582]: time="2025-05-27T03:32:56.156679198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 03:32:56.157215 containerd[1582]: time="2025-05-27T03:32:56.157180318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:32:56.615732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721106580.mount: Deactivated successfully. 
May 27 03:32:56.621002 containerd[1582]: time="2025-05-27T03:32:56.620960276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:32:56.621724 containerd[1582]: time="2025-05-27T03:32:56.621659448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:32:56.622866 containerd[1582]: time="2025-05-27T03:32:56.622802452Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:32:56.624642 containerd[1582]: time="2025-05-27T03:32:56.624578183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:32:56.625198 containerd[1582]: time="2025-05-27T03:32:56.625148372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 467.940513ms" May 27 03:32:56.625198 containerd[1582]: time="2025-05-27T03:32:56.625184169Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:32:56.625750 containerd[1582]: time="2025-05-27T03:32:56.625722860Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 03:32:57.137012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073883532.mount: Deactivated successfully. 
May 27 03:32:59.572150 containerd[1582]: time="2025-05-27T03:32:59.572089061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:59.572726 containerd[1582]: time="2025-05-27T03:32:59.572692703Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 27 03:32:59.573822 containerd[1582]: time="2025-05-27T03:32:59.573796303Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:59.576343 containerd[1582]: time="2025-05-27T03:32:59.576298707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:32:59.577132 containerd[1582]: time="2025-05-27T03:32:59.577099739Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.951348245s" May 27 03:32:59.577132 containerd[1582]: time="2025-05-27T03:32:59.577128994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 03:33:01.386898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:33:01.387067 systemd[1]: kubelet.service: Consumed 209ms CPU time, 111M memory peak. May 27 03:33:01.389438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:33:01.412586 systemd[1]: Reload requested from client PID 2246 ('systemctl') (unit session-7.scope)... May 27 03:33:01.412601 systemd[1]: Reloading... May 27 03:33:01.503924 zram_generator::config[2288]: No configuration found. May 27 03:33:02.002106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:33:02.116159 systemd[1]: Reloading finished in 703 ms. May 27 03:33:02.185248 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:33:02.185344 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:33:02.185658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:33:02.185698 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. May 27 03:33:02.187173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:33:02.356326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:33:02.363985 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:33:02.399199 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:33:02.399199 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 27 03:33:02.399199 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:33:02.399432 kubelet[2336]: I0527 03:33:02.399258 2336 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:33:02.701939 kubelet[2336]: I0527 03:33:02.701854 2336 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:33:02.701939 kubelet[2336]: I0527 03:33:02.701882 2336 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:33:02.702115 kubelet[2336]: I0527 03:33:02.702091 2336 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:33:02.722868 kubelet[2336]: E0527 03:33:02.722831 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:02.725990 kubelet[2336]: I0527 03:33:02.725955 2336 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:33:02.732582 kubelet[2336]: I0527 03:33:02.732562 2336 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:33:02.737436 kubelet[2336]: I0527 03:33:02.737422 2336 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:33:02.738499 kubelet[2336]: I0527 03:33:02.738458 2336 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:33:02.738689 kubelet[2336]: I0527 03:33:02.738490 2336 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:33:02.738792 kubelet[2336]: I0527 03:33:02.738689 2336 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:33:02.738792 kubelet[2336]: I0527 03:33:02.738698 2336 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:33:02.738834 kubelet[2336]: I0527 03:33:02.738822 2336 state_mem.go:36] "Initialized new in-memory state store" May 27 03:33:02.741672 kubelet[2336]: I0527 03:33:02.741641 2336 kubelet.go:446] "Attempting to sync node with API server" May 27 03:33:02.743019 kubelet[2336]: I0527 03:33:02.742978 2336 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:33:02.743019 kubelet[2336]: I0527 03:33:02.743028 2336 kubelet.go:352] "Adding apiserver pod source" May 27 03:33:02.743168 kubelet[2336]: I0527 03:33:02.743040 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:33:02.745283 kubelet[2336]: W0527 03:33:02.745190 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:02.745283 kubelet[2336]: W0527 03:33:02.745216 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:02.745283 kubelet[2336]: E0527 03:33:02.745238 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:02.745283 kubelet[2336]: E0527 03:33:02.745256 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:02.745931 kubelet[2336]: I0527 03:33:02.745909 2336 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:33:02.746292 kubelet[2336]: I0527 03:33:02.746275 2336 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:33:02.746801 kubelet[2336]: W0527 03:33:02.746774 2336 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 03:33:02.749324 kubelet[2336]: I0527 03:33:02.749017 2336 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:33:02.749324 kubelet[2336]: I0527 03:33:02.749061 2336 server.go:1287] "Started kubelet" May 27 03:33:02.750111 kubelet[2336]: I0527 03:33:02.750062 2336 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:33:02.753058 kubelet[2336]: I0527 03:33:02.752990 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:33:02.753413 kubelet[2336]: I0527 03:33:02.753369 2336 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:33:02.754413 kubelet[2336]: I0527 03:33:02.753889 2336 server.go:479] "Adding debug handlers to kubelet server" May 27 03:33:02.755206 kubelet[2336]: I0527 03:33:02.754966 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:33:02.755471 kubelet[2336]: E0527 03:33:02.755453 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:02.755568 kubelet[2336]: I0527 03:33:02.755559 2336 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:33:02.755835 kubelet[2336]: I0527 03:33:02.755817 2336 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:33:02.755892 kubelet[2336]: I0527 03:33:02.755567 2336 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:33:02.756063 kubelet[2336]: I0527 03:33:02.756053 2336 reconciler.go:26] "Reconciler: start to sync state" May 27 03:33:02.756456 kubelet[2336]: E0527 03:33:02.754484 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184344d9d69c2965 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:33:02.749034853 +0000 UTC 
m=+0.381101609,LastTimestamp:2025-05-27 03:33:02.749034853 +0000 UTC m=+0.381101609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:33:02.756840 kubelet[2336]: E0527 03:33:02.756810 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" May 27 03:33:02.756964 kubelet[2336]: W0527 03:33:02.756927 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:02.757033 kubelet[2336]: E0527 03:33:02.757016 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:02.757154 kubelet[2336]: I0527 03:33:02.757129 2336 factory.go:221] Registration of the systemd container factory successfully May 27 03:33:02.757337 kubelet[2336]: I0527 03:33:02.757198 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:33:02.758441 kubelet[2336]: E0527 03:33:02.758422 2336 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:33:02.759311 kubelet[2336]: I0527 03:33:02.759292 2336 factory.go:221] Registration of the containerd container factory successfully May 27 03:33:02.772515 kubelet[2336]: I0527 03:33:02.772394 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:33:02.773704 kubelet[2336]: I0527 03:33:02.773677 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:33:02.773748 kubelet[2336]: I0527 03:33:02.773709 2336 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:33:02.773748 kubelet[2336]: I0527 03:33:02.773729 2336 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:33:02.773748 kubelet[2336]: I0527 03:33:02.773737 2336 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:33:02.773805 kubelet[2336]: E0527 03:33:02.773777 2336 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:33:02.774330 kubelet[2336]: W0527 03:33:02.774275 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:02.774377 kubelet[2336]: E0527 03:33:02.774336 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:02.775635 kubelet[2336]: I0527 03:33:02.775600 2336 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:33:02.775635 kubelet[2336]: I0527 03:33:02.775631 2336 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:33:02.775699 kubelet[2336]: I0527 03:33:02.775645 2336 state_mem.go:36] "Initialized new in-memory state store" May 27 03:33:02.856152 kubelet[2336]: E0527 03:33:02.856113 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:02.874108 kubelet[2336]: E0527 03:33:02.874073 2336 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:33:02.956422 kubelet[2336]: E0527 03:33:02.956314 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:02.957826 kubelet[2336]: E0527 03:33:02.957792 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" May 27 03:33:03.057014 kubelet[2336]: E0527 03:33:03.056972 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.074207 kubelet[2336]: E0527 03:33:03.074141 2336 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:33:03.157511 kubelet[2336]: E0527 03:33:03.157481 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.258491 kubelet[2336]: E0527 03:33:03.258410 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.358292 kubelet[2336]: E0527 03:33:03.358260 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" May 27 03:33:03.359286 kubelet[2336]: E0527 03:33:03.359259 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.459882 kubelet[2336]: E0527 03:33:03.459837 2336 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 27 03:33:03.475108 kubelet[2336]: E0527 03:33:03.475075 2336 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:33:03.560598 kubelet[2336]: E0527 03:33:03.560509 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.612381 kubelet[2336]: W0527 03:33:03.612279 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:03.612473 kubelet[2336]: E0527 03:33:03.612373 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:03.644074 kubelet[2336]: W0527 03:33:03.644046 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:03.644074 kubelet[2336]: E0527 03:33:03.644069 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:03.660634 kubelet[2336]: E0527 03:33:03.660579 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.761040 kubelet[2336]: E0527 03:33:03.760980 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.800373 kubelet[2336]: I0527 03:33:03.800349 2336 policy_none.go:49] "None policy: Start" May 27 03:33:03.800373 kubelet[2336]: I0527 03:33:03.800368 2336 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:33:03.800443 kubelet[2336]: I0527 03:33:03.800381 2336 state_mem.go:35] "Initializing new in-memory state store" May 27 03:33:03.809622 kubelet[2336]: W0527 03:33:03.809578 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:03.809664 kubelet[2336]: E0527 03:33:03.809637 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:03.861155 kubelet[2336]: E0527 03:33:03.861077 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:03.867301 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 27 03:33:03.883471 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:33:03.886766 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:33:03.912428 kubelet[2336]: I0527 03:33:03.912398 2336 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:33:03.912663 kubelet[2336]: I0527 03:33:03.912632 2336 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:33:03.912733 kubelet[2336]: I0527 03:33:03.912651 2336 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:33:03.912887 kubelet[2336]: I0527 03:33:03.912868 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:33:03.913741 kubelet[2336]: E0527 03:33:03.913710 2336 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:33:03.913962 kubelet[2336]: E0527 03:33:03.913945 2336 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 03:33:04.013830 kubelet[2336]: I0527 03:33:04.013802 2336 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:33:04.014143 kubelet[2336]: E0527 03:33:04.014103 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" May 27 03:33:04.151349 kubelet[2336]: W0527 03:33:04.151215 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused May 27 03:33:04.151349 kubelet[2336]: E0527 03:33:04.151279 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:04.159010 kubelet[2336]: E0527 03:33:04.158986 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s" May 27 03:33:04.215237 kubelet[2336]: I0527 03:33:04.215211 2336 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:33:04.215512 kubelet[2336]: E0527 03:33:04.215467 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" May 27 03:33:04.284357 systemd[1]: Created slice kubepods-burstable-pod70381e16927e63be92e5ec51c195c7cd.slice - libcontainer container kubepods-burstable-pod70381e16927e63be92e5ec51c195c7cd.slice. 
May 27 03:33:04.294400 kubelet[2336]: E0527 03:33:04.294372 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:33:04.296220 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 27 03:33:04.314630 kubelet[2336]: E0527 03:33:04.312871 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:33:04.314515 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 27 03:33:04.317120 kubelet[2336]: E0527 03:33:04.317093 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:33:04.364622 kubelet[2336]: I0527 03:33:04.364588 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70381e16927e63be92e5ec51c195c7cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70381e16927e63be92e5ec51c195c7cd\") " pod="kube-system/kube-apiserver-localhost" May 27 03:33:04.364684 kubelet[2336]: I0527 03:33:04.364642 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:04.364684 kubelet[2336]: I0527 03:33:04.364671 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:04.364749 kubelet[2336]: I0527 03:33:04.364694 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:33:04.364749 kubelet[2336]: I0527 03:33:04.364715 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70381e16927e63be92e5ec51c195c7cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70381e16927e63be92e5ec51c195c7cd\") " pod="kube-system/kube-apiserver-localhost" May 27 03:33:04.364749 kubelet[2336]: I0527 03:33:04.364736 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70381e16927e63be92e5ec51c195c7cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70381e16927e63be92e5ec51c195c7cd\") " pod="kube-system/kube-apiserver-localhost" May 27 03:33:04.364839 kubelet[2336]: I0527 03:33:04.364758 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:04.364839 kubelet[2336]: I0527 03:33:04.364780 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:04.364839 kubelet[2336]: I0527 03:33:04.364809 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:04.595931 kubelet[2336]: E0527 03:33:04.595801 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:04.596558 containerd[1582]: time="2025-05-27T03:33:04.596524866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70381e16927e63be92e5ec51c195c7cd,Namespace:kube-system,Attempt:0,}" May 27 03:33:04.613905 kubelet[2336]: E0527 03:33:04.613866 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:04.614339 containerd[1582]: time="2025-05-27T03:33:04.614295739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 03:33:04.617221 kubelet[2336]: I0527 03:33:04.617202 2336 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:33:04.617384 kubelet[2336]: E0527 03:33:04.617343 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:04.617544 kubelet[2336]: E0527 03:33:04.617519 2336 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" May 27 03:33:04.617649 containerd[1582]: time="2025-05-27T03:33:04.617592172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 03:33:04.851572 containerd[1582]: time="2025-05-27T03:33:04.851410619Z" level=info msg="connecting to shim 99754dfe68292240d2619f0b1866281c09500611956a9fc2e68dcc4d085abdce" address="unix:///run/containerd/s/544e994966db85d72086dcd1fbb92add95a4b72525eca307ada13f12bc611925" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:04.854885 containerd[1582]: time="2025-05-27T03:33:04.854774689Z" level=info msg="connecting to shim 02fc6736a3072015b058519ea44d790d6b54b24e2a938d7e1afe1c1d67c8f49a" address="unix:///run/containerd/s/1238c45a90f054f1fcb888923cfb8dbab67f995bbf0b2aead629fe63c6b3c99c" namespace=k8s.io protocol=ttrpc version=3 May 27 
03:33:04.864377 kubelet[2336]: E0527 03:33:04.861666 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" May 27 03:33:04.864471 containerd[1582]: time="2025-05-27T03:33:04.863278399Z" level=info msg="connecting to shim 9314112bbf8bc8f64ec541db77ee2ab74c2a44db7b63dbce18c472887b75c576" address="unix:///run/containerd/s/c62cb3ec09da9be2f78b18cdf4953b5ef74c509891fb687fe3d0555a0c72d22c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:04.876889 systemd[1]: Started cri-containerd-99754dfe68292240d2619f0b1866281c09500611956a9fc2e68dcc4d085abdce.scope - libcontainer container 99754dfe68292240d2619f0b1866281c09500611956a9fc2e68dcc4d085abdce. May 27 03:33:04.882443 systemd[1]: Started cri-containerd-02fc6736a3072015b058519ea44d790d6b54b24e2a938d7e1afe1c1d67c8f49a.scope - libcontainer container 02fc6736a3072015b058519ea44d790d6b54b24e2a938d7e1afe1c1d67c8f49a. May 27 03:33:04.889149 systemd[1]: Started cri-containerd-9314112bbf8bc8f64ec541db77ee2ab74c2a44db7b63dbce18c472887b75c576.scope - libcontainer container 9314112bbf8bc8f64ec541db77ee2ab74c2a44db7b63dbce18c472887b75c576. May 27 03:33:04.929687 containerd[1582]: time="2025-05-27T03:33:04.929636117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70381e16927e63be92e5ec51c195c7cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"02fc6736a3072015b058519ea44d790d6b54b24e2a938d7e1afe1c1d67c8f49a\"" May 27 03:33:04.930503 kubelet[2336]: E0527 03:33:04.930468 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:04.932874 containerd[1582]: time="2025-05-27T03:33:04.932839175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"99754dfe68292240d2619f0b1866281c09500611956a9fc2e68dcc4d085abdce\"" May 27 03:33:04.933436 containerd[1582]: time="2025-05-27T03:33:04.933407031Z" level=info msg="CreateContainer within sandbox \"02fc6736a3072015b058519ea44d790d6b54b24e2a938d7e1afe1c1d67c8f49a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:33:04.933601 kubelet[2336]: E0527 03:33:04.933579 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:04.935493 containerd[1582]: time="2025-05-27T03:33:04.935207137Z" level=info msg="CreateContainer within sandbox \"99754dfe68292240d2619f0b1866281c09500611956a9fc2e68dcc4d085abdce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:33:04.943267 containerd[1582]: time="2025-05-27T03:33:04.943220848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9314112bbf8bc8f64ec541db77ee2ab74c2a44db7b63dbce18c472887b75c576\"" May 27 03:33:04.943937 kubelet[2336]: E0527 03:33:04.943914 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:04.945714 containerd[1582]: time="2025-05-27T03:33:04.945685802Z" level=info msg="CreateContainer within sandbox \"9314112bbf8bc8f64ec541db77ee2ab74c2a44db7b63dbce18c472887b75c576\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:33:04.947977 containerd[1582]: time="2025-05-27T03:33:04.947954497Z" level=info msg="Container de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:04.949663 containerd[1582]: time="2025-05-27T03:33:04.949587571Z" level=info msg="Container cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:04.960747 containerd[1582]: time="2025-05-27T03:33:04.960492175Z" level=info msg="CreateContainer within sandbox \"02fc6736a3072015b058519ea44d790d6b54b24e2a938d7e1afe1c1d67c8f49a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796\"" May 27 03:33:04.961868 containerd[1582]: time="2025-05-27T03:33:04.961839111Z" level=info msg="StartContainer for \"de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796\"" May 27 03:33:04.962591 containerd[1582]: time="2025-05-27T03:33:04.962570904Z" level=info msg="CreateContainer within sandbox \"99754dfe68292240d2619f0b1866281c09500611956a9fc2e68dcc4d085abdce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b\"" May 27 03:33:04.963650 containerd[1582]: time="2025-05-27T03:33:04.962915430Z" level=info msg="Container 384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:04.963791 containerd[1582]: time="2025-05-27T03:33:04.962984890Z" level=info msg="connecting to shim de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796" address="unix:///run/containerd/s/1238c45a90f054f1fcb888923cfb8dbab67f995bbf0b2aead629fe63c6b3c99c" protocol=ttrpc version=3 May 27 03:33:04.964516 containerd[1582]: time="2025-05-27T03:33:04.964084002Z" level=info msg="StartContainer for \"cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b\"" May 27 03:33:04.966144 containerd[1582]: time="2025-05-27T03:33:04.966115122Z" level=info msg="connecting to shim cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b" address="unix:///run/containerd/s/544e994966db85d72086dcd1fbb92add95a4b72525eca307ada13f12bc611925" protocol=ttrpc version=3 May 27 03:33:04.972662 containerd[1582]: time="2025-05-27T03:33:04.972597761Z" level=info msg="CreateContainer within sandbox \"9314112bbf8bc8f64ec541db77ee2ab74c2a44db7b63dbce18c472887b75c576\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58\"" May 27 03:33:04.973239 containerd[1582]: time="2025-05-27T03:33:04.973215480Z" level=info msg="StartContainer for \"384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58\"" May 27 03:33:04.974163 containerd[1582]: time="2025-05-27T03:33:04.974128653Z" level=info msg="connecting to shim 384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58" address="unix:///run/containerd/s/c62cb3ec09da9be2f78b18cdf4953b5ef74c509891fb687fe3d0555a0c72d22c" protocol=ttrpc version=3 May 27 03:33:04.987779 systemd[1]: Started 
cri-containerd-de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796.scope - libcontainer container de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796. May 27 03:33:05.001989 systemd[1]: Started cri-containerd-384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58.scope - libcontainer container 384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58. May 27 03:33:05.004115 systemd[1]: Started cri-containerd-cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b.scope - libcontainer container cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b. May 27 03:33:05.049036 containerd[1582]: time="2025-05-27T03:33:05.048928065Z" level=info msg="StartContainer for \"de4ed725ece2cca85fdea0da2cbf4aef24ff310c834f67bd648b7ab0c432b796\" returns successfully" May 27 03:33:05.064674 containerd[1582]: time="2025-05-27T03:33:05.064586676Z" level=info msg="StartContainer for \"cc092e97beb6488c18ab450891ca8977bbd97bc5624f1cd15dce06908ef6220b\" returns successfully" May 27 03:33:05.064943 containerd[1582]: time="2025-05-27T03:33:05.064846884Z" level=info msg="StartContainer for \"384c74b9f515749c5e4d3ca220b76efb6ed21ba462a8fc51b4a91c6c113ffe58\" returns successfully" May 27 03:33:05.421272 kubelet[2336]: I0527 03:33:05.421239 2336 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:33:05.788023 kubelet[2336]: E0527 03:33:05.787581 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:33:05.788971 kubelet[2336]: E0527 03:33:05.788473 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:05.788971 kubelet[2336]: E0527 03:33:05.788810 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:33:05.788971 kubelet[2336]: E0527 03:33:05.788880 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:05.792064 kubelet[2336]: E0527 03:33:05.792015 2336 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:33:05.792255 kubelet[2336]: E0527 03:33:05.792192 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:05.878850 kubelet[2336]: E0527 03:33:05.878814 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 03:33:05.977885 kubelet[2336]: I0527 03:33:05.977851 2336 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:33:05.977885 kubelet[2336]: E0527 03:33:05.977902 2336 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 03:33:05.989153 kubelet[2336]: E0527 03:33:05.989107 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.089701 kubelet[2336]: E0527 03:33:06.089580 2336 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.190261 kubelet[2336]: E0527 03:33:06.190215 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.291328 kubelet[2336]: E0527 03:33:06.291298 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.392005 kubelet[2336]: E0527 03:33:06.391917 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.492584 kubelet[2336]: E0527 03:33:06.492538 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.593125 kubelet[2336]: E0527 03:33:06.593086 2336 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:06.656934 kubelet[2336]: I0527 03:33:06.656840 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:33:06.660951 kubelet[2336]: E0527 03:33:06.660924 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:33:06.660951 kubelet[2336]: I0527 03:33:06.660940 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:06.662143 kubelet[2336]: E0527 03:33:06.662103 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:06.662143 kubelet[2336]: I0527 03:33:06.662134 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:33:06.665170 kubelet[2336]: E0527 03:33:06.665013 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:33:06.745225 kubelet[2336]: I0527 03:33:06.745189 2336 apiserver.go:52] "Watching apiserver" May 27 03:33:06.756125 kubelet[2336]: I0527 03:33:06.756096 2336 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:33:06.792359 kubelet[2336]: I0527 03:33:06.792332 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:33:06.793123 kubelet[2336]: I0527 03:33:06.792381 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:33:06.793123 kubelet[2336]: I0527 03:33:06.792483 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:06.793782 kubelet[2336]: E0527 03:33:06.793755 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:33:06.793897 kubelet[2336]: E0527 03:33:06.793878 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:06.794519 
kubelet[2336]: E0527 03:33:06.794496 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:33:06.794601 kubelet[2336]: E0527 03:33:06.794582 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:06.794773 kubelet[2336]: E0527 03:33:06.794749 2336 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:06.794921 kubelet[2336]: E0527 03:33:06.794894 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:07.796184 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)... May 27 03:33:07.796199 systemd[1]: Reloading... May 27 03:33:07.831293 kubelet[2336]: I0527 03:33:07.830907 2336 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:33:07.836235 kubelet[2336]: E0527 03:33:07.836200 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:07.873842 zram_generator::config[2657]: No configuration found. May 27 03:33:07.972537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:33:08.100997 systemd[1]: Reloading finished in 304 ms. May 27 03:33:08.132157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:33:08.154075 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:33:08.154393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:33:08.154452 systemd[1]: kubelet.service: Consumed 806ms CPU time, 131.6M memory peak. May 27 03:33:08.156387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:33:08.354591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:33:08.360520 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:33:08.400691 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:33:08.400691 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:33:08.400691 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:33:08.401086 kubelet[2702]: I0527 03:33:08.400766 2702 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:33:08.408817 kubelet[2702]: I0527 03:33:08.408783 2702 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:33:08.408817 kubelet[2702]: I0527 03:33:08.408803 2702 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:33:08.409057 kubelet[2702]: I0527 03:33:08.409031 2702 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:33:08.410148 kubelet[2702]: I0527 03:33:08.410119 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 03:33:08.412195 kubelet[2702]: I0527 03:33:08.412147 2702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:33:08.415783 kubelet[2702]: I0527 03:33:08.415758 2702 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:33:08.421338 kubelet[2702]: I0527 03:33:08.421310 2702 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:33:08.421568 kubelet[2702]: I0527 03:33:08.421527 2702 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:33:08.421732 kubelet[2702]: I0527 03:33:08.421556 2702 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:33:08.421732 kubelet[2702]: I0527 03:33:08.421734 2702 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:33:08.421860 kubelet[2702]: I0527 03:33:08.421743 2702 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:33:08.421860 kubelet[2702]: I0527 03:33:08.421791 2702 state_mem.go:36] "Initialized new in-memory state store" May 27 03:33:08.421961 kubelet[2702]: I0527 
03:33:08.421944 2702 kubelet.go:446] "Attempting to sync node with API server" May 27 03:33:08.421994 kubelet[2702]: I0527 03:33:08.421969 2702 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:33:08.421994 kubelet[2702]: I0527 03:33:08.421991 2702 kubelet.go:352] "Adding apiserver pod source" May 27 03:33:08.422043 kubelet[2702]: I0527 03:33:08.422001 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:33:08.423231 kubelet[2702]: I0527 03:33:08.423190 2702 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:33:08.423698 kubelet[2702]: I0527 03:33:08.423675 2702 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:33:08.424256 kubelet[2702]: I0527 03:33:08.424160 2702 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:33:08.424256 kubelet[2702]: I0527 03:33:08.424220 2702 server.go:1287] "Started kubelet" May 27 03:33:08.426697 kubelet[2702]: I0527 03:33:08.426663 2702 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:33:08.427546 kubelet[2702]: I0527 03:33:08.427523 2702 server.go:479] "Adding debug handlers to kubelet server" May 27 03:33:08.427806 kubelet[2702]: I0527 03:33:08.427792 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:33:08.428347 kubelet[2702]: I0527 03:33:08.428298 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:33:08.428520 kubelet[2702]: I0527 03:33:08.428497 2702 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:33:08.428937 kubelet[2702]: I0527 03:33:08.428913 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:33:08.430068 kubelet[2702]: E0527 03:33:08.430018 2702 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:33:08.430068 kubelet[2702]: I0527 03:33:08.430045 2702 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:33:08.430307 kubelet[2702]: I0527 03:33:08.430207 2702 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:33:08.430347 kubelet[2702]: I0527 03:33:08.430308 2702 reconciler.go:26] "Reconciler: start to sync state" May 27 03:33:08.433698 kubelet[2702]: E0527 03:33:08.433673 2702 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:33:08.435017 kubelet[2702]: I0527 03:33:08.434972 2702 factory.go:221] Registration of the containerd container factory successfully May 27 03:33:08.435017 kubelet[2702]: I0527 03:33:08.434986 2702 factory.go:221] Registration of the systemd container factory successfully May 27 03:33:08.435204 kubelet[2702]: I0527 03:33:08.435066 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:33:08.443794 kubelet[2702]: I0527 03:33:08.443742 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:33:08.445132 kubelet[2702]: I0527 03:33:08.445100 2702 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 03:33:08.445203 kubelet[2702]: I0527 03:33:08.445136 2702 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:33:08.445203 kubelet[2702]: I0527 03:33:08.445157 2702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:33:08.445203 kubelet[2702]: I0527 03:33:08.445166 2702 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:33:08.445275 kubelet[2702]: E0527 03:33:08.445210 2702 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:33:08.466906 kubelet[2702]: I0527 03:33:08.466870 2702 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:33:08.466906 kubelet[2702]: I0527 03:33:08.466887 2702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:33:08.466906 kubelet[2702]: I0527 03:33:08.466904 2702 state_mem.go:36] "Initialized new in-memory state store" May 27 03:33:08.467047 kubelet[2702]: I0527 03:33:08.467028 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:33:08.467080 kubelet[2702]: I0527 03:33:08.467040 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:33:08.467080 kubelet[2702]: I0527 03:33:08.467057 2702 policy_none.go:49] "None policy: Start" May 27 03:33:08.467080 kubelet[2702]: I0527 03:33:08.467066 2702 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:33:08.467080 kubelet[2702]: I0527 03:33:08.467076 2702 state_mem.go:35] "Initializing new in-memory state store" May 27 03:33:08.467169 kubelet[2702]: I0527 03:33:08.467161 2702 state_mem.go:75] "Updated machine memory state" May 27 03:33:08.471415 kubelet[2702]: I0527 03:33:08.471377 2702 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:33:08.471576 kubelet[2702]: I0527 03:33:08.471555 2702 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:33:08.471623 kubelet[2702]: I0527 03:33:08.471574 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:33:08.471783 kubelet[2702]: I0527 03:33:08.471745 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:33:08.472740 kubelet[2702]: E0527 03:33:08.472715 2702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:33:08.545918 kubelet[2702]: I0527 03:33:08.545869 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:33:08.546022 kubelet[2702]: I0527 03:33:08.545936 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:33:08.546022 kubelet[2702]: I0527 03:33:08.545959 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:08.550963 kubelet[2702]: E0527 03:33:08.550921 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:33:08.576276 kubelet[2702]: I0527 03:33:08.576256 2702 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:33:08.581101 kubelet[2702]: I0527 03:33:08.581071 2702 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 03:33:08.581249 kubelet[2702]: I0527 03:33:08.581125 2702 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:33:08.632057 kubelet[2702]: I0527 03:33:08.631955 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70381e16927e63be92e5ec51c195c7cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70381e16927e63be92e5ec51c195c7cd\") " pod="kube-system/kube-apiserver-localhost" May 27 03:33:08.632057 kubelet[2702]: I0527 03:33:08.631982 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70381e16927e63be92e5ec51c195c7cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70381e16927e63be92e5ec51c195c7cd\") " pod="kube-system/kube-apiserver-localhost" May 27 03:33:08.632057 kubelet[2702]: I0527 03:33:08.632005 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:08.632057 kubelet[2702]: I0527 03:33:08.632023 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:08.632057 kubelet[2702]: I0527 03:33:08.632040 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:33:08.632342 kubelet[2702]: I0527 03:33:08.632057 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70381e16927e63be92e5ec51c195c7cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70381e16927e63be92e5ec51c195c7cd\") " pod="kube-system/kube-apiserver-localhost" May 27 
03:33:08.632342 kubelet[2702]: I0527 03:33:08.632077 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:08.632342 kubelet[2702]: I0527 03:33:08.632094 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:08.632342 kubelet[2702]: I0527 03:33:08.632124 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:33:08.850949 kubelet[2702]: E0527 03:33:08.850919 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:08.850949 kubelet[2702]: E0527 03:33:08.850956 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:08.851101 kubelet[2702]: E0527 03:33:08.851051 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:09.422371 kubelet[2702]: I0527 03:33:09.422338 2702 apiserver.go:52] "Watching apiserver" May 27 03:33:09.430841 kubelet[2702]: I0527 03:33:09.430802 2702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:33:09.456098 kubelet[2702]: I0527 03:33:09.455955 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:33:09.456098 kubelet[2702]: I0527 03:33:09.455973 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:09.456177 kubelet[2702]: I0527 03:33:09.456156 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:33:09.462828 kubelet[2702]: E0527 03:33:09.462754 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:33:09.463144 kubelet[2702]: E0527 03:33:09.463090 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:09.464208 kubelet[2702]: E0527 03:33:09.463679 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 03:33:09.464208 kubelet[2702]: E0527 03:33:09.463817 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:09.464208 kubelet[2702]: E0527 03:33:09.464154 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:33:09.464304 kubelet[2702]: E0527 03:33:09.464253 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:09.481009 kubelet[2702]: I0527 03:33:09.480936 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.480918713 podStartE2EDuration="2.480918713s" podCreationTimestamp="2025-05-27 03:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:09.475126308 +0000 UTC m=+1.110226168" watchObservedRunningTime="2025-05-27 03:33:09.480918713 +0000 UTC m=+1.116018573" May 27 03:33:09.487941 kubelet[2702]: I0527 03:33:09.487907 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.487898695 podStartE2EDuration="1.487898695s" podCreationTimestamp="2025-05-27 03:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:09.487881433 +0000 UTC m=+1.122981303" watchObservedRunningTime="2025-05-27 03:33:09.487898695 +0000 UTC m=+1.122998555" May 27 03:33:09.488032 kubelet[2702]: I0527 03:33:09.488007 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.488003191 podStartE2EDuration="1.488003191s" podCreationTimestamp="2025-05-27 03:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:09.481069536 +0000 UTC m=+1.116169396" watchObservedRunningTime="2025-05-27 03:33:09.488003191 +0000 UTC m=+1.123103051" May 27 03:33:10.456949 kubelet[2702]: E0527 03:33:10.456917 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:10.456949 kubelet[2702]: E0527 03:33:10.456963 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:10.457382 kubelet[2702]: E0527 03:33:10.457143 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:12.779921 kubelet[2702]: E0527 03:33:12.779846 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:14.606187 kubelet[2702]: I0527 03:33:14.606152 2702 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:33:14.606568 containerd[1582]: time="2025-05-27T03:33:14.606395774Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
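[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors above come from the glibc resolver honoring at most three `nameserver` entries in resolv.conf; when the host lists more, kubelet keeps the first three and logs the applied line. A minimal sketch of that cap, assuming a hypothetical fourth host nameserver (8.8.4.4, not shown in the log) — illustrative only, not kubelet's actual code:

```go
// Illustrative sketch of the resolv.conf nameserver cap behind the
// repeated "Nameserver limits exceeded" entries above.
package main

import (
	"fmt"
	"strings"
)

// glibc's resolver honors at most three "nameserver" entries (MAXNS).
const maxNameservers = 3

// applyNameserverLimit keeps the first three servers and reports
// whether anything was dropped, mirroring what the warning describes.
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// 8.8.4.4 is a hypothetical extra entry; the log only shows the applied three.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(configured)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```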
May 27 03:33:14.606815 kubelet[2702]: I0527 03:33:14.606727 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:33:14.640280 kubelet[2702]: E0527 03:33:14.640191 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:15.464413 systemd[1]: Created slice kubepods-besteffort-pod97ad79f1_c737_4b3d_b339_0bda12e8877c.slice - libcontainer container kubepods-besteffort-pod97ad79f1_c737_4b3d_b339_0bda12e8877c.slice. May 27 03:33:15.477688 kubelet[2702]: I0527 03:33:15.477659 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97ad79f1-c737-4b3d-b339-0bda12e8877c-kube-proxy\") pod \"kube-proxy-9lwvc\" (UID: \"97ad79f1-c737-4b3d-b339-0bda12e8877c\") " pod="kube-system/kube-proxy-9lwvc" May 27 03:33:15.477688 kubelet[2702]: I0527 03:33:15.477688 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ad79f1-c737-4b3d-b339-0bda12e8877c-xtables-lock\") pod \"kube-proxy-9lwvc\" (UID: \"97ad79f1-c737-4b3d-b339-0bda12e8877c\") " pod="kube-system/kube-proxy-9lwvc" May 27 03:33:15.477794 kubelet[2702]: I0527 03:33:15.477704 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad79f1-c737-4b3d-b339-0bda12e8877c-lib-modules\") pod \"kube-proxy-9lwvc\" (UID: \"97ad79f1-c737-4b3d-b339-0bda12e8877c\") " pod="kube-system/kube-proxy-9lwvc" May 27 03:33:15.477794 kubelet[2702]: I0527 03:33:15.477719 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5r5j\" (UniqueName: \"kubernetes.io/projected/97ad79f1-c737-4b3d-b339-0bda12e8877c-kube-api-access-j5r5j\") pod \"kube-proxy-9lwvc\" (UID: \"97ad79f1-c737-4b3d-b339-0bda12e8877c\") " pod="kube-system/kube-proxy-9lwvc" May 27 03:33:15.577334 systemd[1]: Created slice kubepods-besteffort-pod5837bc4a_1960_4271_b3c2_db5d9c1864e6.slice - libcontainer container kubepods-besteffort-pod5837bc4a_1960_4271_b3c2_db5d9c1864e6.slice. 
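[Editor's note] The "Created slice kubepods-besteffort-pod…" entries above show how kubelet derives a systemd slice name from a pod's QoS class and UID. In slice unit names, "-" encodes cgroup hierarchy (kubepods.slice → kubepods-besteffort.slice → …), so the dashes inside the UID are replaced with underscores. A small sketch of that naming rule, assuming the helper name — not kubelet's cgroup code:

```go
// Illustrative reconstruction of the slice names systemd creates above.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the per-pod slice unit name; dashes in the UID are
// escaped to underscores so they are not read as hierarchy separators.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID taken from the kube-proxy-9lwvc entries above.
	fmt.Println(podSliceName("besteffort", "97ad79f1-c737-4b3d-b339-0bda12e8877c"))
	// kubepods-besteffort-pod97ad79f1_c737_4b3d_b339_0bda12e8877c.slice
}
```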
May 27 03:33:15.578198 kubelet[2702]: I0527 03:33:15.578171 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ggt\" (UniqueName: \"kubernetes.io/projected/5837bc4a-1960-4271-b3c2-db5d9c1864e6-kube-api-access-76ggt\") pod \"tigera-operator-844669ff44-rljmf\" (UID: \"5837bc4a-1960-4271-b3c2-db5d9c1864e6\") " pod="tigera-operator/tigera-operator-844669ff44-rljmf" May 27 03:33:15.578721 kubelet[2702]: I0527 03:33:15.578694 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5837bc4a-1960-4271-b3c2-db5d9c1864e6-var-lib-calico\") pod \"tigera-operator-844669ff44-rljmf\" (UID: \"5837bc4a-1960-4271-b3c2-db5d9c1864e6\") " pod="tigera-operator/tigera-operator-844669ff44-rljmf" May 27 03:33:15.778389 kubelet[2702]: E0527 03:33:15.778292 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:15.778968 containerd[1582]: time="2025-05-27T03:33:15.778873189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lwvc,Uid:97ad79f1-c737-4b3d-b339-0bda12e8877c,Namespace:kube-system,Attempt:0,}" May 27 03:33:15.796308 containerd[1582]: time="2025-05-27T03:33:15.796265650Z" level=info msg="connecting to shim aaadc04127500c21282f3a96d4da195077eb3891d08bbbf1ec0c9c3019d55efc" address="unix:///run/containerd/s/b9b2c74b3d782782b1f109105c2104d28c9e389d50da534ff73bca58699db390" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:15.822768 systemd[1]: Started cri-containerd-aaadc04127500c21282f3a96d4da195077eb3891d08bbbf1ec0c9c3019d55efc.scope - libcontainer container aaadc04127500c21282f3a96d4da195077eb3891d08bbbf1ec0c9c3019d55efc. 
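[Editor's note] The "connecting to shim … protocol=ttrpc version=3" entry above records containerd dialing the runtime shim's unix socket and speaking ttrpc (a lightweight gRPC variant) over it. A minimal sketch of such a connection, with the socket path copied from the log; the client setup is illustrative and not containerd's internal code:

```go
// Sketch: dial a containerd shim socket and attach a ttrpc client.
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	conn, err := net.Dial("unix",
		"/run/containerd/s/b9b2c74b3d782782b1f109105c2104d28c9e389d50da534ff73bca58699db390")
	if err != nil {
		log.Fatal(err)
	}
	// Task API requests (Create, Start, ...) travel over this client.
	client := ttrpc.NewClient(conn)
	defer client.Close()
}
```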
May 27 03:33:15.846770 containerd[1582]: time="2025-05-27T03:33:15.846734062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lwvc,Uid:97ad79f1-c737-4b3d-b339-0bda12e8877c,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaadc04127500c21282f3a96d4da195077eb3891d08bbbf1ec0c9c3019d55efc\"" May 27 03:33:15.847347 kubelet[2702]: E0527 03:33:15.847323 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:15.849124 containerd[1582]: time="2025-05-27T03:33:15.849087557Z" level=info msg="CreateContainer within sandbox \"aaadc04127500c21282f3a96d4da195077eb3891d08bbbf1ec0c9c3019d55efc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:33:15.861173 containerd[1582]: time="2025-05-27T03:33:15.860225516Z" level=info msg="Container 37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:15.869600 containerd[1582]: time="2025-05-27T03:33:15.869557558Z" level=info msg="CreateContainer within sandbox \"aaadc04127500c21282f3a96d4da195077eb3891d08bbbf1ec0c9c3019d55efc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959\"" May 27 03:33:15.870254 containerd[1582]: time="2025-05-27T03:33:15.870199648Z" level=info msg="StartContainer for \"37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959\"" May 27 03:33:15.871464 containerd[1582]: time="2025-05-27T03:33:15.871432747Z" level=info msg="connecting to shim 37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959" address="unix:///run/containerd/s/b9b2c74b3d782782b1f109105c2104d28c9e389d50da534ff73bca58699db390" protocol=ttrpc version=3 May 27 03:33:15.881217 containerd[1582]: time="2025-05-27T03:33:15.881186156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rljmf,Uid:5837bc4a-1960-4271-b3c2-db5d9c1864e6,Namespace:tigera-operator,Attempt:0,}" May 27 03:33:15.889753 systemd[1]: Started cri-containerd-37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959.scope - libcontainer container 37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959. May 27 03:33:15.903567 containerd[1582]: time="2025-05-27T03:33:15.903516850Z" level=info msg="connecting to shim 514db7209250cfd082922bab891c16e2428e6171330a782c088e8ba458d81f4a" address="unix:///run/containerd/s/6aacfed48df50582d070c4bcafe72be0a630a3c97ec2ace74160a76c60439766" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:15.927782 systemd[1]: Started cri-containerd-514db7209250cfd082922bab891c16e2428e6171330a782c088e8ba458d81f4a.scope - libcontainer container 514db7209250cfd082922bab891c16e2428e6171330a782c088e8ba458d81f4a. 
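[Editor's note] The entries above trace the standard CRI call sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer runs it. A hedged sketch of the same sequence against the public CRI gRPC API (k8s.io/cri-api), with config fields elided — a sketch, not kubelet's implementation:

```go
// Sketch of the RunPodSandbox -> CreateContainer -> StartContainer flow.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. Create the pod sandbox (the "RunPodSandbox ... returns sandbox id" entry).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* pod metadata, namespaces, ... */ },
	})
	if err != nil {
		log.Fatal(err)
	}
	// 2. Create the container inside it; a real call also passes SandboxConfig.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, command, mounts, ... */ },
	})
	if err != nil {
		log.Fatal(err)
	}
	// 3. Start it (the "StartContainer ... returns successfully" entry).
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```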
May 27 03:33:15.933024 containerd[1582]: time="2025-05-27T03:33:15.932955229Z" level=info msg="StartContainer for \"37acdf93c01ceb2d24f314a53afa94f1e0fdd1be66c736f8b7d9e649977d6959\" returns successfully" May 27 03:33:15.974478 containerd[1582]: time="2025-05-27T03:33:15.974405458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-rljmf,Uid:5837bc4a-1960-4271-b3c2-db5d9c1864e6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"514db7209250cfd082922bab891c16e2428e6171330a782c088e8ba458d81f4a\"" May 27 03:33:15.976185 containerd[1582]: time="2025-05-27T03:33:15.976155940Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:33:16.466774 kubelet[2702]: E0527 03:33:16.466742 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:16.474567 kubelet[2702]: I0527 03:33:16.474468 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9lwvc" podStartSLOduration=1.474428741 podStartE2EDuration="1.474428741s" podCreationTimestamp="2025-05-27 03:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:16.474093751 +0000 UTC m=+8.109193611" watchObservedRunningTime="2025-05-27 03:33:16.474428741 +0000 UTC m=+8.109528601" May 27 03:33:17.821362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291033966.mount: Deactivated successfully. May 27 03:33:18.318113 containerd[1582]: time="2025-05-27T03:33:18.318059989Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:18.318736 containerd[1582]: time="2025-05-27T03:33:18.318704097Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:33:18.319830 containerd[1582]: time="2025-05-27T03:33:18.319799136Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:18.321682 containerd[1582]: time="2025-05-27T03:33:18.321652311Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:18.322263 containerd[1582]: time="2025-05-27T03:33:18.322226185Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.3460462s" May 27 03:33:18.322291 containerd[1582]: time="2025-05-27T03:33:18.322260901Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:33:18.324258 containerd[1582]: time="2025-05-27T03:33:18.324234304Z" level=info msg="CreateContainer within sandbox \"514db7209250cfd082922bab891c16e2428e6171330a782c088e8ba458d81f4a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:33:18.332377 containerd[1582]: time="2025-05-27T03:33:18.332338131Z" level=info 
msg="Container 9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:18.337543 containerd[1582]: time="2025-05-27T03:33:18.337502191Z" level=info msg="CreateContainer within sandbox \"514db7209250cfd082922bab891c16e2428e6171330a782c088e8ba458d81f4a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f\"" May 27 03:33:18.338009 containerd[1582]: time="2025-05-27T03:33:18.337955546Z" level=info msg="StartContainer for \"9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f\"" May 27 03:33:18.342217 containerd[1582]: time="2025-05-27T03:33:18.342181406Z" level=info msg="connecting to shim 9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f" address="unix:///run/containerd/s/6aacfed48df50582d070c4bcafe72be0a630a3c97ec2ace74160a76c60439766" protocol=ttrpc version=3 May 27 03:33:18.388840 systemd[1]: Started cri-containerd-9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f.scope - libcontainer container 9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f. May 27 03:33:18.416781 containerd[1582]: time="2025-05-27T03:33:18.416739053Z" level=info msg="StartContainer for \"9fc06d136c6ef6e1251f761bbe39c7d8940f0967b2452affe5176bad4822123f\" returns successfully" May 27 03:33:18.430508 kubelet[2702]: E0527 03:33:18.430460 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:18.470744 kubelet[2702]: E0527 03:33:18.470707 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:18.486051 kubelet[2702]: I0527 03:33:18.486000 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-rljmf" podStartSLOduration=1.138772663 podStartE2EDuration="3.485981854s" podCreationTimestamp="2025-05-27 03:33:15 +0000 UTC" firstStartedPulling="2025-05-27 03:33:15.975651634 +0000 UTC m=+7.610751494" lastFinishedPulling="2025-05-27 03:33:18.322860825 +0000 UTC m=+9.957960685" observedRunningTime="2025-05-27 03:33:18.485868208 +0000 UTC m=+10.120968068" watchObservedRunningTime="2025-05-27 03:33:18.485981854 +0000 UTC m=+10.121081714" May 27 03:33:22.783446 kubelet[2702]: E0527 03:33:22.783389 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:23.166702 update_engine[1564]: I20250527 03:33:23.166654 1564 update_attempter.cc:509] Updating boot flags... May 27 03:33:23.558468 kubelet[2702]: E0527 03:33:23.478388 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:24.064719 sudo[1791]: pam_unix(sudo:session): session closed for user root May 27 03:33:24.066994 sshd[1790]: Connection closed by 10.0.0.1 port 57698 May 27 03:33:24.067782 sshd-session[1788]: pam_unix(sshd:session): session closed for user core May 27 03:33:24.073010 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:57698.service: Deactivated successfully. May 27 03:33:24.076054 systemd[1]: session-7.scope: Deactivated successfully. 
May 27 03:33:24.076436 systemd[1]: session-7.scope: Consumed 3.559s CPU time, 227.1M memory peak. May 27 03:33:24.080364 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. May 27 03:33:24.082513 systemd-logind[1563]: Removed session 7. May 27 03:33:24.650573 kubelet[2702]: E0527 03:33:24.650526 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:27.141909 systemd[1]: Created slice kubepods-besteffort-pod6c6d2d00_2e32_49b4_a7c8_5cc78bdfbd7e.slice - libcontainer container kubepods-besteffort-pod6c6d2d00_2e32_49b4_a7c8_5cc78bdfbd7e.slice. May 27 03:33:27.155709 kubelet[2702]: I0527 03:33:27.155651 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e-typha-certs\") pod \"calico-typha-858f4d4866-w82sc\" (UID: \"6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e\") " pod="calico-system/calico-typha-858f4d4866-w82sc" May 27 03:33:27.155709 kubelet[2702]: I0527 03:33:27.155701 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5ff\" (UniqueName: \"kubernetes.io/projected/6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e-kube-api-access-kg5ff\") pod \"calico-typha-858f4d4866-w82sc\" (UID: \"6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e\") " pod="calico-system/calico-typha-858f4d4866-w82sc" May 27 03:33:27.155709 kubelet[2702]: I0527 03:33:27.155721 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e-tigera-ca-bundle\") pod \"calico-typha-858f4d4866-w82sc\" (UID: \"6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e\") " pod="calico-system/calico-typha-858f4d4866-w82sc" May 27 03:33:27.214221 systemd[1]: Created slice kubepods-besteffort-pod2228ba2e_2d29_41cb_9f0c_3f0deb3db7aa.slice - libcontainer container kubepods-besteffort-pod2228ba2e_2d29_41cb_9f0c_3f0deb3db7aa.slice. 
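[Editor's note] The long run of driver-call failures below comes from kubelet's FlexVolume plugin prober: Calico ships a `nodeagent~uds` driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the `uds` executable is absent, so each `init` probe produces empty output, and decoding that empty output as JSON fails with exactly the logged "unexpected end of JSON input". A minimal reproduction of the JSON step (the DriverStatus shape is a simplification, not the real FlexVolume schema):

```go
// Why every probe below reports "unexpected end of JSON input":
// unmarshaling zero bytes of driver output always fails this way.
package main

import (
	"encoding/json"
	"fmt"
)

type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st) // empty driver output
	fmt.Println(err)                       // unexpected end of JSON input
}
```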
May 27 03:33:27.256089 kubelet[2702]: I0527 03:33:27.256032 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-tigera-ca-bundle\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256089 kubelet[2702]: I0527 03:33:27.256071 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-xtables-lock\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256089 kubelet[2702]: I0527 03:33:27.256085 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-policysync\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256295 kubelet[2702]: I0527 03:33:27.256109 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-cni-net-dir\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256295 kubelet[2702]: I0527 03:33:27.256133 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-cni-bin-dir\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256295 kubelet[2702]: I0527 03:33:27.256147 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-var-lib-calico\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256295 kubelet[2702]: I0527 03:33:27.256162 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-cni-log-dir\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256295 kubelet[2702]: I0527 03:33:27.256178 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-flexvol-driver-host\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256411 kubelet[2702]: I0527 03:33:27.256194 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-lib-modules\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256411 kubelet[2702]: I0527 03:33:27.256209 2702 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-var-run-calico\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256411 kubelet[2702]: I0527 03:33:27.256224 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-node-certs\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.256411 kubelet[2702]: I0527 03:33:27.256239 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9w65\" (UniqueName: \"kubernetes.io/projected/2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa-kube-api-access-n9w65\") pod \"calico-node-8lrxr\" (UID: \"2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa\") " pod="calico-system/calico-node-8lrxr" May 27 03:33:27.367169 kubelet[2702]: E0527 03:33:27.367105 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0" May 27 03:33:27.367466 kubelet[2702]: E0527 03:33:27.367439 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.367466 kubelet[2702]: W0527 03:33:27.367462 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.367532 kubelet[2702]: E0527 03:33:27.367504 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.373155 kubelet[2702]: E0527 03:33:27.373119 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.373155 kubelet[2702]: W0527 03:33:27.373142 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.373155 kubelet[2702]: E0527 03:33:27.373159 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.446539 kubelet[2702]: E0527 03:33:27.446504 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.446539 kubelet[2702]: W0527 03:33:27.446526 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.446687 kubelet[2702]: E0527 03:33:27.446548 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.446752 kubelet[2702]: E0527 03:33:27.446735 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.446752 kubelet[2702]: W0527 03:33:27.446745 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.446844 kubelet[2702]: E0527 03:33:27.446756 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.446976 kubelet[2702]: E0527 03:33:27.446924 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.446976 kubelet[2702]: W0527 03:33:27.446942 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.446976 kubelet[2702]: E0527 03:33:27.446951 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.448266 kubelet[2702]: E0527 03:33:27.448240 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.448266 kubelet[2702]: W0527 03:33:27.448257 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.448266 kubelet[2702]: E0527 03:33:27.448268 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.448544 kubelet[2702]: E0527 03:33:27.448514 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.448544 kubelet[2702]: W0527 03:33:27.448539 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.448595 kubelet[2702]: E0527 03:33:27.448564 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.448825 kubelet[2702]: E0527 03:33:27.448809 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.448825 kubelet[2702]: W0527 03:33:27.448820 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.448882 kubelet[2702]: E0527 03:33:27.448828 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.449086 kubelet[2702]: E0527 03:33:27.449045 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.449086 kubelet[2702]: W0527 03:33:27.449070 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.449236 kubelet[2702]: E0527 03:33:27.449096 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.449317 kubelet[2702]: E0527 03:33:27.449302 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.449317 kubelet[2702]: W0527 03:33:27.449313 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.449374 kubelet[2702]: E0527 03:33:27.449322 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.449496 kubelet[2702]: E0527 03:33:27.449439 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:27.449640 kubelet[2702]: E0527 03:33:27.449621 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.449640 kubelet[2702]: W0527 03:33:27.449636 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.449714 kubelet[2702]: E0527 03:33:27.449648 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.449957 containerd[1582]: time="2025-05-27T03:33:27.449919936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-858f4d4866-w82sc,Uid:6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e,Namespace:calico-system,Attempt:0,}" May 27 03:33:27.450387 kubelet[2702]: E0527 03:33:27.449972 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.450387 kubelet[2702]: W0527 03:33:27.449979 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.450387 kubelet[2702]: E0527 03:33:27.449989 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.450387 kubelet[2702]: E0527 03:33:27.450117 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.450387 kubelet[2702]: W0527 03:33:27.450123 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.450387 kubelet[2702]: E0527 03:33:27.450130 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.450387 kubelet[2702]: E0527 03:33:27.450307 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.450387 kubelet[2702]: W0527 03:33:27.450317 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.450387 kubelet[2702]: E0527 03:33:27.450326 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.450695 kubelet[2702]: E0527 03:33:27.450522 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.450695 kubelet[2702]: W0527 03:33:27.450530 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.450695 kubelet[2702]: E0527 03:33:27.450538 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.451110 kubelet[2702]: E0527 03:33:27.451092 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.451110 kubelet[2702]: W0527 03:33:27.451106 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.451187 kubelet[2702]: E0527 03:33:27.451118 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.451300 kubelet[2702]: E0527 03:33:27.451285 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.451300 kubelet[2702]: W0527 03:33:27.451295 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.451357 kubelet[2702]: E0527 03:33:27.451304 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.451506 kubelet[2702]: E0527 03:33:27.451471 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.451506 kubelet[2702]: W0527 03:33:27.451492 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.451506 kubelet[2702]: E0527 03:33:27.451500 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.451776 kubelet[2702]: E0527 03:33:27.451758 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.451776 kubelet[2702]: W0527 03:33:27.451769 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.451776 kubelet[2702]: E0527 03:33:27.451777 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.451993 kubelet[2702]: E0527 03:33:27.451952 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.451993 kubelet[2702]: W0527 03:33:27.451972 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.451993 kubelet[2702]: E0527 03:33:27.451981 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.452180 kubelet[2702]: E0527 03:33:27.452165 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.452180 kubelet[2702]: W0527 03:33:27.452177 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.452248 kubelet[2702]: E0527 03:33:27.452188 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.452372 kubelet[2702]: E0527 03:33:27.452358 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.452372 kubelet[2702]: W0527 03:33:27.452367 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.452430 kubelet[2702]: E0527 03:33:27.452375 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.457565 kubelet[2702]: E0527 03:33:27.457545 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.457565 kubelet[2702]: W0527 03:33:27.457555 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.457565 kubelet[2702]: E0527 03:33:27.457566 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.457692 kubelet[2702]: I0527 03:33:27.457589 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c1deac8-af48-4170-a1b6-cf33ec7da6f0-varrun\") pod \"csi-node-driver-xkksp\" (UID: \"3c1deac8-af48-4170-a1b6-cf33ec7da6f0\") " pod="calico-system/csi-node-driver-xkksp" May 27 03:33:27.457800 kubelet[2702]: E0527 03:33:27.457782 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.457800 kubelet[2702]: W0527 03:33:27.457792 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.457843 kubelet[2702]: E0527 03:33:27.457806 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.457843 kubelet[2702]: I0527 03:33:27.457818 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5l2s\" (UniqueName: \"kubernetes.io/projected/3c1deac8-af48-4170-a1b6-cf33ec7da6f0-kube-api-access-l5l2s\") pod \"csi-node-driver-xkksp\" (UID: \"3c1deac8-af48-4170-a1b6-cf33ec7da6f0\") " pod="calico-system/csi-node-driver-xkksp" May 27 03:33:27.458017 kubelet[2702]: E0527 03:33:27.457991 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.458017 kubelet[2702]: W0527 03:33:27.458008 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.458073 kubelet[2702]: E0527 03:33:27.458024 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.458207 kubelet[2702]: E0527 03:33:27.458183 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.458207 kubelet[2702]: W0527 03:33:27.458194 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.458266 kubelet[2702]: E0527 03:33:27.458208 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.458395 kubelet[2702]: E0527 03:33:27.458379 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.458395 kubelet[2702]: W0527 03:33:27.458390 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.458440 kubelet[2702]: E0527 03:33:27.458402 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.458440 kubelet[2702]: I0527 03:33:27.458428 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c1deac8-af48-4170-a1b6-cf33ec7da6f0-socket-dir\") pod \"csi-node-driver-xkksp\" (UID: \"3c1deac8-af48-4170-a1b6-cf33ec7da6f0\") " pod="calico-system/csi-node-driver-xkksp" May 27 03:33:27.458645 kubelet[2702]: E0527 03:33:27.458628 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.458645 kubelet[2702]: W0527 03:33:27.458639 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.458707 kubelet[2702]: E0527 03:33:27.458661 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.458707 kubelet[2702]: I0527 03:33:27.458676 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c1deac8-af48-4170-a1b6-cf33ec7da6f0-registration-dir\") pod \"csi-node-driver-xkksp\" (UID: \"3c1deac8-af48-4170-a1b6-cf33ec7da6f0\") " pod="calico-system/csi-node-driver-xkksp" May 27 03:33:27.458876 kubelet[2702]: E0527 03:33:27.458860 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.458876 kubelet[2702]: W0527 03:33:27.458871 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.458928 kubelet[2702]: E0527 03:33:27.458887 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:33:27.458928 kubelet[2702]: I0527 03:33:27.458903 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c1deac8-af48-4170-a1b6-cf33ec7da6f0-kubelet-dir\") pod \"csi-node-driver-xkksp\" (UID: \"3c1deac8-af48-4170-a1b6-cf33ec7da6f0\") " pod="calico-system/csi-node-driver-xkksp" May 27 03:33:27.459072 kubelet[2702]: E0527 03:33:27.459058 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.459072 kubelet[2702]: W0527 03:33:27.459070 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.459113 kubelet[2702]: E0527 03:33:27.459083 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.459242 kubelet[2702]: E0527 03:33:27.459231 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.459242 kubelet[2702]: W0527 03:33:27.459239 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.459292 kubelet[2702]: E0527 03:33:27.459251 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.459414 kubelet[2702]: E0527 03:33:27.459404 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.459414 kubelet[2702]: W0527 03:33:27.459411 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.459453 kubelet[2702]: E0527 03:33:27.459424 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:33:27.459588 kubelet[2702]: E0527 03:33:27.459577 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:33:27.459588 kubelet[2702]: W0527 03:33:27.459585 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:33:27.459659 kubelet[2702]: E0527 03:33:27.459597 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 27 03:33:27.459778 kubelet[2702]: E0527 03:33:27.459767 2702 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 03:33:27.459778 kubelet[2702]: W0527 03:33:27.459776 2702 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 03:33:27.459817 kubelet[2702]: E0527 03:33:27.459788 2702 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet messages above repeat verbatim, with only the timestamps advancing, through May 27 03:33:27.578179]
May 27 03:33:27.520521 containerd[1582]: time="2025-05-27T03:33:27.520480446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lrxr,Uid:2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa,Namespace:calico-system,Attempt:0,}"
May 27 03:33:27.604402 containerd[1582]: time="2025-05-27T03:33:27.604349676Z" level=info msg="connecting to shim 8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91" address="unix:///run/containerd/s/b58bf23adadf25bbbdc7fd277ced7b14fa770b19e5c1db37201ea5205e676c18" namespace=k8s.io protocol=ttrpc version=3
May 27 03:33:27.605044 containerd[1582]: time="2025-05-27T03:33:27.604993365Z" level=info msg="connecting to shim 305afd762bafff27252505b640ae61ce5f25beec7c467810f73e010b0dc47662" address="unix:///run/containerd/s/6685c6cc69360bfd5bbba9a37930c4913ec3c1b8533b1a4747e0b45300fbd696" namespace=k8s.io protocol=ttrpc version=3
May 27 03:33:27.640756 systemd[1]: Started cri-containerd-305afd762bafff27252505b640ae61ce5f25beec7c467810f73e010b0dc47662.scope - libcontainer container 305afd762bafff27252505b640ae61ce5f25beec7c467810f73e010b0dc47662.
May 27 03:33:27.642718 systemd[1]: Started cri-containerd-8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91.scope - libcontainer container 8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91.
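The repeating kubelet errors above are its FlexVolume probe loop: each scan of the plugin directory re-executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses stdout as JSON, so the missing binary surfaces twice per attempt, once as the $PATH error and once, because stdout stays empty, as "unexpected end of JSON input". For orientation, a minimal stand-in driver that would satisfy the probe could look like the Go sketch below; it follows the documented FlexVolume call convention and is illustrative only, not the actual nodeagent~uds driver:

    package main

    // Minimal FlexVolume driver stub. kubelet invokes the driver binary as
    // "<driver> init" and expects a JSON status object on stdout; an empty
    // reply is exactly what triggers driver-call.go's unmarshal error above.

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type response struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(response{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Any verb this stub does not implement must still answer in JSON.
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }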
May 27 03:33:27.704383 containerd[1582]: time="2025-05-27T03:33:27.704074286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8lrxr,Uid:2228ba2e-2d29-41cb-9f0c-3f0deb3db7aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\""
May 27 03:33:27.706779 containerd[1582]: time="2025-05-27T03:33:27.706665302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 27 03:33:27.710136 containerd[1582]: time="2025-05-27T03:33:27.710090798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-858f4d4866-w82sc,Uid:6c6d2d00-2e32-49b4-a7c8-5cc78bdfbd7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"305afd762bafff27252505b640ae61ce5f25beec7c467810f73e010b0dc47662\""
May 27 03:33:27.710787 kubelet[2702]: E0527 03:33:27.710764 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:33:29.181509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586531853.mount: Deactivated successfully.
May 27 03:33:29.242912 containerd[1582]: time="2025-05-27T03:33:29.242860351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:29.243706 containerd[1582]: time="2025-05-27T03:33:29.243681564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460"
May 27 03:33:29.244981 containerd[1582]: time="2025-05-27T03:33:29.244923823Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:29.246660 containerd[1582]: time="2025-05-27T03:33:29.246588401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:29.247104 containerd[1582]: time="2025-05-27T03:33:29.247081364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.540384723s"
May 27 03:33:29.247191 containerd[1582]: time="2025-05-27T03:33:29.247107914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\""
May 27 03:33:29.247877 containerd[1582]: time="2025-05-27T03:33:29.247826903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 27 03:33:29.249535 containerd[1582]: time="2025-05-27T03:33:29.249503063Z" level=info msg="CreateContainer within sandbox \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 27 03:33:29.259545 containerd[1582]: time="2025-05-27T03:33:29.259500882Z" level=info msg="Container ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486: CDI devices from CRI Config.CDIDevices: []"
May 27 03:33:29.267131 containerd[1582]: time="2025-05-27T03:33:29.267092120Z" level=info msg="CreateContainer within sandbox \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\""
May 27 03:33:29.267647 containerd[1582]: time="2025-05-27T03:33:29.267501894Z" level=info msg="StartContainer for \"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\""
May 27 03:33:29.268839 containerd[1582]: time="2025-05-27T03:33:29.268817121Z" level=info msg="connecting to shim ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486" address="unix:///run/containerd/s/b58bf23adadf25bbbdc7fd277ced7b14fa770b19e5c1db37201ea5205e676c18" protocol=ttrpc version=3
May 27 03:33:29.295274 systemd[1]: Started cri-containerd-ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486.scope - libcontainer container ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486.
May 27 03:33:29.335829 containerd[1582]: time="2025-05-27T03:33:29.335791148Z" level=info msg="StartContainer for \"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\" returns successfully"
May 27 03:33:29.343406 systemd[1]: cri-containerd-ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486.scope: Deactivated successfully.
May 27 03:33:29.345688 containerd[1582]: time="2025-05-27T03:33:29.345527121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\" id:\"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\" pid:3323 exited_at:{seconds:1748316809 nanos:344782823}"
May 27 03:33:29.345946 containerd[1582]: time="2025-05-27T03:33:29.345746425Z" level=info msg="received exit event container_id:\"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\" id:\"ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486\" pid:3323 exited_at:{seconds:1748316809 nanos:344782823}"
May 27 03:33:29.367360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca4bc79829eaf8742cf4123125ddca0fede36e7ce68c66f6ddf37a0c6a106486-rootfs.mount: Deactivated successfully.
May 27 03:33:29.445971 kubelet[2702]: E0527 03:33:29.445850 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0"
May 27 03:33:31.446536 kubelet[2702]: E0527 03:33:31.446468 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0"
May 27 03:33:32.455363 containerd[1582]: time="2025-05-27T03:33:32.455315458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:32.456107 containerd[1582]: time="2025-05-27T03:33:32.456077537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33665828"
May 27 03:33:32.457236 containerd[1582]: time="2025-05-27T03:33:32.457206689Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:32.459283 containerd[1582]: time="2025-05-27T03:33:32.459230862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:32.459811 containerd[1582]: time="2025-05-27T03:33:32.459758168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.211900917s"
May 27 03:33:32.459811 containerd[1582]: time="2025-05-27T03:33:32.459797122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\""
May 27 03:33:32.460726 containerd[1582]: time="2025-05-27T03:33:32.460698083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\""
May 27 03:33:32.470319 containerd[1582]: time="2025-05-27T03:33:32.469468577Z" level=info msg="CreateContainer within sandbox \"305afd762bafff27252505b640ae61ce5f25beec7c467810f73e010b0dc47662\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 27 03:33:32.478039 containerd[1582]: time="2025-05-27T03:33:32.477995159Z" level=info msg="Container f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f: CDI devices from CRI Config.CDIDevices: []"
May 27 03:33:32.486136 containerd[1582]: time="2025-05-27T03:33:32.486086239Z" level=info msg="CreateContainer within sandbox \"305afd762bafff27252505b640ae61ce5f25beec7c467810f73e010b0dc47662\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f\""
May 27 03:33:32.486648 containerd[1582]: time="2025-05-27T03:33:32.486563681Z" level=info msg="StartContainer for \"f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f\""
May 27 03:33:32.487631 containerd[1582]: time="2025-05-27T03:33:32.487597795Z" level=info msg="connecting to shim f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f" address="unix:///run/containerd/s/6685c6cc69360bfd5bbba9a37930c4913ec3c1b8533b1a4747e0b45300fbd696" protocol=ttrpc version=3
May 27 03:33:32.516778 systemd[1]: Started cri-containerd-f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f.scope - libcontainer container f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f.
May 27 03:33:32.563688 containerd[1582]: time="2025-05-27T03:33:32.563648368Z" level=info msg="StartContainer for \"f72ed9ee923eb34eb4fa4e024e46c8c91c85c4b5ce11c6f74d6b58f3562dcd5f\" returns successfully"
May 27 03:33:33.445858 kubelet[2702]: E0527 03:33:33.445793 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0"
May 27 03:33:33.500056 kubelet[2702]: E0527 03:33:33.500031 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:33:33.509192 kubelet[2702]: I0527 03:33:33.509089 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-858f4d4866-w82sc" podStartSLOduration=2.759806341 podStartE2EDuration="7.509074542s" podCreationTimestamp="2025-05-27 03:33:26 +0000 UTC" firstStartedPulling="2025-05-27 03:33:27.711199517 +0000 UTC m=+19.346299377" lastFinishedPulling="2025-05-27 03:33:32.460467728 +0000 UTC m=+24.095567578" observedRunningTime="2025-05-27 03:33:33.508696448 +0000 UTC m=+25.143796298" watchObservedRunningTime="2025-05-27 03:33:33.509074542 +0000 UTC m=+25.144174402"
May 27 03:33:34.501050 kubelet[2702]: I0527 03:33:34.501011 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:33:34.501424 kubelet[2702]: E0527 03:33:34.501296 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:33:35.445781 kubelet[2702]: E0527 03:33:35.445740 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0"
May 27 03:33:37.367171 containerd[1582]: time="2025-05-27T03:33:37.367107168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:37.368633 containerd[1582]: time="2025-05-27T03:33:37.368458717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568"
May 27 03:33:37.371200 containerd[1582]: time="2025-05-27T03:33:37.371139901Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:37.373694 containerd[1582]: time="2025-05-27T03:33:37.373652386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:33:37.374326 containerd[1582]: time="2025-05-27T03:33:37.374295769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.913482548s"
May 27 03:33:37.374387 containerd[1582]: time="2025-05-27T03:33:37.374329213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\""
May 27 03:33:37.376332 containerd[1582]: time="2025-05-27T03:33:37.376287324Z" level=info msg="CreateContainer within sandbox \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 27 03:33:37.385753 containerd[1582]: time="2025-05-27T03:33:37.385710075Z" level=info msg="Container 127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906: CDI devices from CRI Config.CDIDevices: []"
May 27 03:33:37.394686 containerd[1582]: time="2025-05-27T03:33:37.394649436Z" level=info msg="CreateContainer within sandbox \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\""
May 27 03:33:37.395121 containerd[1582]: time="2025-05-27T03:33:37.395092351Z" level=info msg="StartContainer for \"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\""
May 27 03:33:37.396344 containerd[1582]: time="2025-05-27T03:33:37.396304616Z" level=info msg="connecting to shim 127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906" address="unix:///run/containerd/s/b58bf23adadf25bbbdc7fd277ced7b14fa770b19e5c1db37201ea5205e676c18" protocol=ttrpc version=3
May 27 03:33:37.415735 systemd[1]: Started cri-containerd-127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906.scope - libcontainer container 127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906.
May 27 03:33:37.446108 kubelet[2702]: E0527 03:33:37.446061 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0"
May 27 03:33:37.771566 containerd[1582]: time="2025-05-27T03:33:37.771414327Z" level=info msg="StartContainer for \"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\" returns successfully"
May 27 03:33:39.073277 containerd[1582]: time="2025-05-27T03:33:39.073189379Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:33:39.076398 systemd[1]: cri-containerd-127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906.scope: Deactivated successfully.
May 27 03:33:39.076758 systemd[1]: cri-containerd-127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906.scope: Consumed 588ms CPU time, 179.6M memory peak, 8K read from disk, 170.9M written to disk.
May 27 03:33:39.078393 containerd[1582]: time="2025-05-27T03:33:39.078359498Z" level=info msg="received exit event container_id:\"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\" id:\"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\" pid:3426 exited_at:{seconds:1748316819 nanos:78115448}"
May 27 03:33:39.078496 containerd[1582]: time="2025-05-27T03:33:39.078447134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\" id:\"127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906\" pid:3426 exited_at:{seconds:1748316819 nanos:78115448}"
May 27 03:33:39.100388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-127e43c8159639c2727b852809889ce3248006ad6a9e210d08161eccd3eef906-rootfs.mount: Deactivated successfully.
May 27 03:33:39.170204 kubelet[2702]: I0527 03:33:39.170164 2702 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 03:33:39.242863 systemd[1]: Created slice kubepods-burstable-pod5b3df184_106d_4a00_9323_ce4ef282b010.slice - libcontainer container kubepods-burstable-pod5b3df184_106d_4a00_9323_ce4ef282b010.slice.
May 27 03:33:39.246655 kubelet[2702]: I0527 03:33:39.246602 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b3df184-106d-4a00-9323-ce4ef282b010-config-volume\") pod \"coredns-668d6bf9bc-sdsr9\" (UID: \"5b3df184-106d-4a00-9323-ce4ef282b010\") " pod="kube-system/coredns-668d6bf9bc-sdsr9"
May 27 03:33:39.246735 kubelet[2702]: I0527 03:33:39.246668 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjgj7\" (UniqueName: \"kubernetes.io/projected/5b3df184-106d-4a00-9323-ce4ef282b010-kube-api-access-gjgj7\") pod \"coredns-668d6bf9bc-sdsr9\" (UID: \"5b3df184-106d-4a00-9323-ce4ef282b010\") " pod="kube-system/coredns-668d6bf9bc-sdsr9"
May 27 03:33:39.249573 systemd[1]: Created slice kubepods-besteffort-podf3cdcef0_8daf_4680_9267_582dfa8e22eb.slice - libcontainer container kubepods-besteffort-podf3cdcef0_8daf_4680_9267_582dfa8e22eb.slice.
May 27 03:33:39.255629 systemd[1]: Created slice kubepods-besteffort-podf4b9f017_8bb5_4cec_a91d_73aeff0fac1b.slice - libcontainer container kubepods-besteffort-podf4b9f017_8bb5_4cec_a91d_73aeff0fac1b.slice.
May 27 03:33:39.318065 systemd[1]: Created slice kubepods-besteffort-podec001429_c4e3_432b_9155_d44bc397d9ca.slice - libcontainer container kubepods-besteffort-podec001429_c4e3_432b_9155_d44bc397d9ca.slice.
May 27 03:33:39.323811 systemd[1]: Created slice kubepods-besteffort-pod89747668_1d74_49e4_b34b_55cf8d01980a.slice - libcontainer container kubepods-besteffort-pod89747668_1d74_49e4_b34b_55cf8d01980a.slice.
May 27 03:33:39.327926 systemd[1]: Created slice kubepods-besteffort-podca6521c6_d3ea_4dcf_ba8f_c45d32999754.slice - libcontainer container kubepods-besteffort-podca6521c6_d3ea_4dcf_ba8f_c45d32999754.slice.
May 27 03:33:39.334028 systemd[1]: Created slice kubepods-burstable-pod1bda2375_27ab_4c61_8b67_2615788d423b.slice - libcontainer container kubepods-burstable-pod1bda2375_27ab_4c61_8b67_2615788d423b.slice.
May 27 03:33:39.347204 kubelet[2702]: I0527 03:33:39.347170 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62snn\" (UniqueName: \"kubernetes.io/projected/f3cdcef0-8daf-4680-9267-582dfa8e22eb-kube-api-access-62snn\") pod \"calico-kube-controllers-7b65b8b67-vrhgb\" (UID: \"f3cdcef0-8daf-4680-9267-582dfa8e22eb\") " pod="calico-system/calico-kube-controllers-7b65b8b67-vrhgb"
May 27 03:33:39.347271 kubelet[2702]: I0527 03:33:39.347213 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89747668-1d74-49e4-b34b-55cf8d01980a-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-kszft\" (UID: \"89747668-1d74-49e4-b34b-55cf8d01980a\") " pod="calico-system/goldmane-78d55f7ddc-kszft"
May 27 03:33:39.347271 kubelet[2702]: I0527 03:33:39.347235 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdggf\" (UniqueName: \"kubernetes.io/projected/ec001429-c4e3-432b-9155-d44bc397d9ca-kube-api-access-kdggf\") pod \"calico-apiserver-6bf56db757-9rkq6\" (UID: \"ec001429-c4e3-432b-9155-d44bc397d9ca\") " pod="calico-apiserver/calico-apiserver-6bf56db757-9rkq6"
May 27 03:33:39.347325 kubelet[2702]: I0527 03:33:39.347268 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-backend-key-pair\") pod \"whisker-6d4b7d48f7-p64tj\" (UID: \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\") " pod="calico-system/whisker-6d4b7d48f7-p64tj"
May 27 03:33:39.347325 kubelet[2702]: I0527 03:33:39.347295 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bda2375-27ab-4c61-8b67-2615788d423b-config-volume\") pod \"coredns-668d6bf9bc-2vc8v\" (UID: \"1bda2375-27ab-4c61-8b67-2615788d423b\") " pod="kube-system/coredns-668d6bf9bc-2vc8v"
May 27 03:33:39.347401 kubelet[2702]: I0527 03:33:39.347370 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwszz\" (UniqueName: \"kubernetes.io/projected/89747668-1d74-49e4-b34b-55cf8d01980a-kube-api-access-vwszz\") pod \"goldmane-78d55f7ddc-kszft\" (UID: \"89747668-1d74-49e4-b34b-55cf8d01980a\") " pod="calico-system/goldmane-78d55f7ddc-kszft"
May 27 03:33:39.347445 kubelet[2702]: I0527 03:33:39.347428 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89747668-1d74-49e4-b34b-55cf8d01980a-config\") pod \"goldmane-78d55f7ddc-kszft\" (UID: \"89747668-1d74-49e4-b34b-55cf8d01980a\") " pod="calico-system/goldmane-78d55f7ddc-kszft"
May 27 03:33:39.347473 kubelet[2702]: I0527 03:33:39.347449 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvjt\" (UniqueName: \"kubernetes.io/projected/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-kube-api-access-gxvjt\") pod \"whisker-6d4b7d48f7-p64tj\" (UID: \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\") " pod="calico-system/whisker-6d4b7d48f7-p64tj"
May 27 03:33:39.347473 kubelet[2702]: I0527 03:33:39.347464 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4b9f017-8bb5-4cec-a91d-73aeff0fac1b-calico-apiserver-certs\") pod \"calico-apiserver-6bf56db757-xknc5\" (UID: \"f4b9f017-8bb5-4cec-a91d-73aeff0fac1b\") " pod="calico-apiserver/calico-apiserver-6bf56db757-xknc5"
May 27 03:33:39.347521 kubelet[2702]: I0527 03:33:39.347478 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n5vk\" (UniqueName: \"kubernetes.io/projected/f4b9f017-8bb5-4cec-a91d-73aeff0fac1b-kube-api-access-7n5vk\") pod \"calico-apiserver-6bf56db757-xknc5\" (UID: \"f4b9f017-8bb5-4cec-a91d-73aeff0fac1b\") " pod="calico-apiserver/calico-apiserver-6bf56db757-xknc5"
May 27 03:33:39.347521 kubelet[2702]: I0527 03:33:39.347500 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3cdcef0-8daf-4680-9267-582dfa8e22eb-tigera-ca-bundle\") pod \"calico-kube-controllers-7b65b8b67-vrhgb\" (UID: \"f3cdcef0-8daf-4680-9267-582dfa8e22eb\") " pod="calico-system/calico-kube-controllers-7b65b8b67-vrhgb"
May 27 03:33:39.347521 kubelet[2702]: I0527 03:33:39.347515 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxfv\" (UniqueName: \"kubernetes.io/projected/1bda2375-27ab-4c61-8b67-2615788d423b-kube-api-access-2xxfv\") pod \"coredns-668d6bf9bc-2vc8v\" (UID: \"1bda2375-27ab-4c61-8b67-2615788d423b\") " pod="kube-system/coredns-668d6bf9bc-2vc8v"
May 27 03:33:39.347592 kubelet[2702]: I0527 03:33:39.347531 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-ca-bundle\") pod \"whisker-6d4b7d48f7-p64tj\" (UID: \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\") " pod="calico-system/whisker-6d4b7d48f7-p64tj"
May 27 03:33:39.347658 kubelet[2702]: I0527 03:33:39.347587 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec001429-c4e3-432b-9155-d44bc397d9ca-calico-apiserver-certs\") pod \"calico-apiserver-6bf56db757-9rkq6\" (UID: \"ec001429-c4e3-432b-9155-d44bc397d9ca\") " pod="calico-apiserver/calico-apiserver-6bf56db757-9rkq6"
May 27 03:33:39.347689 kubelet[2702]: I0527 03:33:39.347658 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/89747668-1d74-49e4-b34b-55cf8d01980a-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-kszft\" (UID: \"89747668-1d74-49e4-b34b-55cf8d01980a\") " pod="calico-system/goldmane-78d55f7ddc-kszft"
May 27 03:33:39.480544 systemd[1]: Created slice kubepods-besteffort-pod3c1deac8_af48_4170_a1b6_cf33ec7da6f0.slice - libcontainer container kubepods-besteffort-pod3c1deac8_af48_4170_a1b6_cf33ec7da6f0.slice.
May 27 03:33:39.482807 containerd[1582]: time="2025-05-27T03:33:39.482763865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkksp,Uid:3c1deac8-af48-4170-a1b6-cf33ec7da6f0,Namespace:calico-system,Attempt:0,}"
May 27 03:33:39.543857 containerd[1582]: time="2025-05-27T03:33:39.543804959Z" level=error msg="Failed to destroy network for sandbox \"3900237cdd25e8ac3b70c685acf11ef50a17a63d66a658a282d23b74343a8246\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.545160 containerd[1582]: time="2025-05-27T03:33:39.545115306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkksp,Uid:3c1deac8-af48-4170-a1b6-cf33ec7da6f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900237cdd25e8ac3b70c685acf11ef50a17a63d66a658a282d23b74343a8246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.545381 kubelet[2702]: E0527 03:33:39.545329 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900237cdd25e8ac3b70c685acf11ef50a17a63d66a658a282d23b74343a8246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.545442 kubelet[2702]: E0527 03:33:39.545410 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900237cdd25e8ac3b70c685acf11ef50a17a63d66a658a282d23b74343a8246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xkksp"
May 27 03:33:39.545442 kubelet[2702]: E0527 03:33:39.545430 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900237cdd25e8ac3b70c685acf11ef50a17a63d66a658a282d23b74343a8246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xkksp"
May 27 03:33:39.545499 kubelet[2702]: E0527 03:33:39.545473 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xkksp_calico-system(3c1deac8-af48-4170-a1b6-cf33ec7da6f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xkksp_calico-system(3c1deac8-af48-4170-a1b6-cf33ec7da6f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3900237cdd25e8ac3b70c685acf11ef50a17a63d66a658a282d23b74343a8246\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xkksp" podUID="3c1deac8-af48-4170-a1b6-cf33ec7da6f0"
May 27 03:33:39.546592 kubelet[2702]: E0527 03:33:39.546558 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:33:39.547102 containerd[1582]: time="2025-05-27T03:33:39.547061623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sdsr9,Uid:5b3df184-106d-4a00-9323-ce4ef282b010,Namespace:kube-system,Attempt:0,}"
May 27 03:33:39.553276 containerd[1582]: time="2025-05-27T03:33:39.553244812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b65b8b67-vrhgb,Uid:f3cdcef0-8daf-4680-9267-582dfa8e22eb,Namespace:calico-system,Attempt:0,}"
May 27 03:33:39.612512 containerd[1582]: time="2025-05-27T03:33:39.612395574Z" level=error msg="Failed to destroy network for sandbox \"b9c8c2f9e248a3d6aaa8be7e17a8f8d5270b35c0384012079b6ec8f1aac479ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.614298 containerd[1582]: time="2025-05-27T03:33:39.614255859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sdsr9,Uid:5b3df184-106d-4a00-9323-ce4ef282b010,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c8c2f9e248a3d6aaa8be7e17a8f8d5270b35c0384012079b6ec8f1aac479ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.614635 kubelet[2702]: E0527 03:33:39.614552 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c8c2f9e248a3d6aaa8be7e17a8f8d5270b35c0384012079b6ec8f1aac479ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.614697 kubelet[2702]: E0527 03:33:39.614667 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c8c2f9e248a3d6aaa8be7e17a8f8d5270b35c0384012079b6ec8f1aac479ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sdsr9"
May 27 03:33:39.614697 kubelet[2702]: E0527 03:33:39.614689 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c8c2f9e248a3d6aaa8be7e17a8f8d5270b35c0384012079b6ec8f1aac479ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sdsr9"
May 27 03:33:39.614824 kubelet[2702]: E0527 03:33:39.614762 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sdsr9_kube-system(5b3df184-106d-4a00-9323-ce4ef282b010)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sdsr9_kube-system(5b3df184-106d-4a00-9323-ce4ef282b010)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9c8c2f9e248a3d6aaa8be7e17a8f8d5270b35c0384012079b6ec8f1aac479ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sdsr9" podUID="5b3df184-106d-4a00-9323-ce4ef282b010"
May 27 03:33:39.614962 containerd[1582]: time="2025-05-27T03:33:39.614850699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-xknc5,Uid:f4b9f017-8bb5-4cec-a91d-73aeff0fac1b,Namespace:calico-apiserver,Attempt:0,}"
May 27 03:33:39.622005 containerd[1582]: time="2025-05-27T03:33:39.621971705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-9rkq6,Uid:ec001429-c4e3-432b-9155-d44bc397d9ca,Namespace:calico-apiserver,Attempt:0,}"
May 27 03:33:39.628100 containerd[1582]: time="2025-05-27T03:33:39.627936591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kszft,Uid:89747668-1d74-49e4-b34b-55cf8d01980a,Namespace:calico-system,Attempt:0,}"
May 27 03:33:39.632011 containerd[1582]: time="2025-05-27T03:33:39.631985440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d4b7d48f7-p64tj,Uid:ca6521c6-d3ea-4dcf-ba8f-c45d32999754,Namespace:calico-system,Attempt:0,}"
May 27 03:33:39.637406 kubelet[2702]: E0527 03:33:39.637234 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:33:39.638168 containerd[1582]: time="2025-05-27T03:33:39.638128713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vc8v,Uid:1bda2375-27ab-4c61-8b67-2615788d423b,Namespace:kube-system,Attempt:0,}"
May 27 03:33:39.638351 containerd[1582]: time="2025-05-27T03:33:39.638144162Z" level=error msg="Failed to destroy network for sandbox \"5b1fbbc7af483cba0f0809793e257227d15bfc32414dcafbaa3f140a3c8e74fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.658823 containerd[1582]: time="2025-05-27T03:33:39.658776560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b65b8b67-vrhgb,Uid:f3cdcef0-8daf-4680-9267-582dfa8e22eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fbbc7af483cba0f0809793e257227d15bfc32414dcafbaa3f140a3c8e74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.659536 kubelet[2702]: E0527 03:33:39.659465 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fbbc7af483cba0f0809793e257227d15bfc32414dcafbaa3f140a3c8e74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.659601 kubelet[2702]: E0527 03:33:39.659544 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fbbc7af483cba0f0809793e257227d15bfc32414dcafbaa3f140a3c8e74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b65b8b67-vrhgb"
May 27 03:33:39.659601 kubelet[2702]: E0527 03:33:39.659563 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fbbc7af483cba0f0809793e257227d15bfc32414dcafbaa3f140a3c8e74fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b65b8b67-vrhgb"
May 27 03:33:39.659681 kubelet[2702]: E0527 03:33:39.659623 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b65b8b67-vrhgb_calico-system(f3cdcef0-8daf-4680-9267-582dfa8e22eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b65b8b67-vrhgb_calico-system(f3cdcef0-8daf-4680-9267-582dfa8e22eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b1fbbc7af483cba0f0809793e257227d15bfc32414dcafbaa3f140a3c8e74fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b65b8b67-vrhgb" podUID="f3cdcef0-8daf-4680-9267-582dfa8e22eb"
May 27 03:33:39.686850 containerd[1582]: time="2025-05-27T03:33:39.686719401Z" level=error msg="Failed to destroy network for sandbox \"bbc4d7b1d28fe94a245a3584df77c30a681954af98671109ee208afbfab91343\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.688433 containerd[1582]: time="2025-05-27T03:33:39.688379107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-xknc5,Uid:f4b9f017-8bb5-4cec-a91d-73aeff0fac1b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc4d7b1d28fe94a245a3584df77c30a681954af98671109ee208afbfab91343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.688679 kubelet[2702]: E0527 03:33:39.688634 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc4d7b1d28fe94a245a3584df77c30a681954af98671109ee208afbfab91343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.688733 kubelet[2702]: E0527 03:33:39.688698 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc4d7b1d28fe94a245a3584df77c30a681954af98671109ee208afbfab91343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf56db757-xknc5"
May 27 03:33:39.688758 kubelet[2702]: E0527 03:33:39.688735 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc4d7b1d28fe94a245a3584df77c30a681954af98671109ee208afbfab91343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf56db757-xknc5"
May 27 03:33:39.688790 kubelet[2702]: E0527 03:33:39.688774 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf56db757-xknc5_calico-apiserver(f4b9f017-8bb5-4cec-a91d-73aeff0fac1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf56db757-xknc5_calico-apiserver(f4b9f017-8bb5-4cec-a91d-73aeff0fac1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbc4d7b1d28fe94a245a3584df77c30a681954af98671109ee208afbfab91343\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf56db757-xknc5" podUID="f4b9f017-8bb5-4cec-a91d-73aeff0fac1b"
May 27 03:33:39.704596 containerd[1582]: time="2025-05-27T03:33:39.704548249Z" level=error msg="Failed to destroy network for sandbox \"a8b0d70a0e11947d1951bca9a0a183cb52fa00d1f2f85aba8f9ab968a8bd11ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.726363 containerd[1582]: time="2025-05-27T03:33:39.707449684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-9rkq6,Uid:ec001429-c4e3-432b-9155-d44bc397d9ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b0d70a0e11947d1951bca9a0a183cb52fa00d1f2f85aba8f9ab968a8bd11ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.726553 containerd[1582]: time="2025-05-27T03:33:39.717505067Z" level=error msg="Failed to destroy network for sandbox \"2b6199bf9ebadff17aec319d5d079a53267fb2193992a97c95dca655ead0a8f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.727005 kubelet[2702]: E0527 03:33:39.726966 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b0d70a0e11947d1951bca9a0a183cb52fa00d1f2f85aba8f9ab968a8bd11ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.727070 kubelet[2702]: E0527 03:33:39.727026 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b0d70a0e11947d1951bca9a0a183cb52fa00d1f2f85aba8f9ab968a8bd11ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf56db757-9rkq6"
May 27 03:33:39.727070 kubelet[2702]: E0527 03:33:39.727048 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8b0d70a0e11947d1951bca9a0a183cb52fa00d1f2f85aba8f9ab968a8bd11ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf56db757-9rkq6"
May 27 03:33:39.727119 kubelet[2702]: E0527 03:33:39.727083 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf56db757-9rkq6_calico-apiserver(ec001429-c4e3-432b-9155-d44bc397d9ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf56db757-9rkq6_calico-apiserver(ec001429-c4e3-432b-9155-d44bc397d9ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8b0d70a0e11947d1951bca9a0a183cb52fa00d1f2f85aba8f9ab968a8bd11ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf56db757-9rkq6" podUID="ec001429-c4e3-432b-9155-d44bc397d9ca"
May 27 03:33:39.728016 containerd[1582]: time="2025-05-27T03:33:39.727976634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kszft,Uid:89747668-1d74-49e4-b34b-55cf8d01980a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6199bf9ebadff17aec319d5d079a53267fb2193992a97c95dca655ead0a8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.728197 kubelet[2702]: E0527 03:33:39.728167 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6199bf9ebadff17aec319d5d079a53267fb2193992a97c95dca655ead0a8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.728244 kubelet[2702]: E0527 03:33:39.728202 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6199bf9ebadff17aec319d5d079a53267fb2193992a97c95dca655ead0a8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-kszft"
May 27 03:33:39.728244 kubelet[2702]: E0527 03:33:39.728218 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6199bf9ebadff17aec319d5d079a53267fb2193992a97c95dca655ead0a8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-kszft"
May 27 03:33:39.728324 kubelet[2702]: E0527 03:33:39.728243 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-kszft_calico-system(89747668-1d74-49e4-b34b-55cf8d01980a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-kszft_calico-system(89747668-1d74-49e4-b34b-55cf8d01980a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b6199bf9ebadff17aec319d5d079a53267fb2193992a97c95dca655ead0a8f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a"
May 27 03:33:39.728413 containerd[1582]: time="2025-05-27T03:33:39.728352813Z" level=error msg="Failed to destroy network for sandbox \"309beee7fee7965f2cd58617e343593f7effb0efd97cb26720b6851acfaa34d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.729700 containerd[1582]: time="2025-05-27T03:33:39.729672179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d4b7d48f7-p64tj,Uid:ca6521c6-d3ea-4dcf-ba8f-c45d32999754,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"309beee7fee7965f2cd58617e343593f7effb0efd97cb26720b6851acfaa34d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.729852 kubelet[2702]: E0527 03:33:39.729813 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"309beee7fee7965f2cd58617e343593f7effb0efd97cb26720b6851acfaa34d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 03:33:39.729892 kubelet[2702]: E0527 03:33:39.729876 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"309beee7fee7965f2cd58617e343593f7effb0efd97cb26720b6851acfaa34d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d4b7d48f7-p64tj"
May 27 03:33:39.729925 kubelet[2702]: E0527 03:33:39.729897 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"309beee7fee7965f2cd58617e343593f7effb0efd97cb26720b6851acfaa34d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d4b7d48f7-p64tj"
May 27 03:33:39.729967 kubelet[2702]: E0527 03:33:39.729941 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d4b7d48f7-p64tj_calico-system(ca6521c6-d3ea-4dcf-ba8f-c45d32999754)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d4b7d48f7-p64tj_calico-system(ca6521c6-d3ea-4dcf-ba8f-c45d32999754)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"309beee7fee7965f2cd58617e343593f7effb0efd97cb26720b6851acfaa34d9\\\": plugin type=\\\"calico\\\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d4b7d48f7-p64tj" podUID="ca6521c6-d3ea-4dcf-ba8f-c45d32999754" May 27 03:33:39.739806 containerd[1582]: time="2025-05-27T03:33:39.739750214Z" level=error msg="Failed to destroy network for sandbox \"fbf4fa5e63ff68d731cee97d473e60b4b3e6d9e09ca648c0734f83d83a598d0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:33:39.740905 containerd[1582]: time="2025-05-27T03:33:39.740872558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vc8v,Uid:1bda2375-27ab-4c61-8b67-2615788d423b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf4fa5e63ff68d731cee97d473e60b4b3e6d9e09ca648c0734f83d83a598d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:33:39.741127 kubelet[2702]: E0527 03:33:39.741094 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf4fa5e63ff68d731cee97d473e60b4b3e6d9e09ca648c0734f83d83a598d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:33:39.741164 kubelet[2702]: E0527 03:33:39.741141 2702 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf4fa5e63ff68d731cee97d473e60b4b3e6d9e09ca648c0734f83d83a598d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2vc8v" May 27 03:33:39.741164 kubelet[2702]: E0527 03:33:39.741157 2702 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbf4fa5e63ff68d731cee97d473e60b4b3e6d9e09ca648c0734f83d83a598d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2vc8v" May 27 03:33:39.741281 kubelet[2702]: E0527 03:33:39.741192 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2vc8v_kube-system(1bda2375-27ab-4c61-8b67-2615788d423b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2vc8v_kube-system(1bda2375-27ab-4c61-8b67-2615788d423b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbf4fa5e63ff68d731cee97d473e60b4b3e6d9e09ca648c0734f83d83a598d0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2vc8v" podUID="1bda2375-27ab-4c61-8b67-2615788d423b" May 27 03:33:39.781700 containerd[1582]: time="2025-05-27T03:33:39.781509514Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:33:41.964736 kubelet[2702]: I0527 03:33:41.964687 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:33:41.965187 kubelet[2702]: E0527 03:33:41.965158 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:42.785896 kubelet[2702]: E0527 03:33:42.785864 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:44.659813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074578549.mount: Deactivated successfully. May 27 03:33:46.072644 containerd[1582]: time="2025-05-27T03:33:46.072566578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:46.073858 containerd[1582]: time="2025-05-27T03:33:46.073829343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:33:46.075244 containerd[1582]: time="2025-05-27T03:33:46.075189502Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:46.077379 containerd[1582]: time="2025-05-27T03:33:46.077335828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:46.077854 containerd[1582]: time="2025-05-27T03:33:46.077820740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.296275118s" May 27 03:33:46.077896 containerd[1582]: time="2025-05-27T03:33:46.077859103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:33:46.085818 containerd[1582]: time="2025-05-27T03:33:46.085779510Z" level=info msg="CreateContainer within sandbox \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:33:46.096023 containerd[1582]: time="2025-05-27T03:33:46.095987611Z" level=info msg="Container 28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:46.107187 containerd[1582]: time="2025-05-27T03:33:46.107145898Z" level=info msg="CreateContainer within sandbox \"8a7219cf3c654634408c54408297d5d749ea3cbec4bbec8513046acc611c5d91\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\"" May 27 03:33:46.107824 containerd[1582]: time="2025-05-27T03:33:46.107751066Z" level=info msg="StartContainer for \"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\"" May 27 03:33:46.109304 containerd[1582]: time="2025-05-27T03:33:46.109268320Z" level=info msg="connecting to shim 
28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece" address="unix:///run/containerd/s/b58bf23adadf25bbbdc7fd277ced7b14fa770b19e5c1db37201ea5205e676c18" protocol=ttrpc version=3 May 27 03:33:46.176765 systemd[1]: Started cri-containerd-28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece.scope - libcontainer container 28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece. May 27 03:33:46.228954 containerd[1582]: time="2025-05-27T03:33:46.228909814Z" level=info msg="StartContainer for \"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\" returns successfully" May 27 03:33:46.298985 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:33:46.299826 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 27 03:33:46.391147 kubelet[2702]: I0527 03:33:46.391011 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvjt\" (UniqueName: \"kubernetes.io/projected/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-kube-api-access-gxvjt\") pod \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\" (UID: \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\") " May 27 03:33:46.391147 kubelet[2702]: I0527 03:33:46.391047 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-backend-key-pair\") pod \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\" (UID: \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\") " May 27 03:33:46.391147 kubelet[2702]: I0527 03:33:46.391064 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-ca-bundle\") pod \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\" (UID: \"ca6521c6-d3ea-4dcf-ba8f-c45d32999754\") " May 27 03:33:46.392200 kubelet[2702]: I0527 03:33:46.391474 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ca6521c6-d3ea-4dcf-ba8f-c45d32999754" (UID: "ca6521c6-d3ea-4dcf-ba8f-c45d32999754"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:33:46.395261 kubelet[2702]: I0527 03:33:46.395160 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ca6521c6-d3ea-4dcf-ba8f-c45d32999754" (UID: "ca6521c6-d3ea-4dcf-ba8f-c45d32999754"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:33:46.396346 kubelet[2702]: I0527 03:33:46.396308 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-kube-api-access-gxvjt" (OuterVolumeSpecName: "kube-api-access-gxvjt") pod "ca6521c6-d3ea-4dcf-ba8f-c45d32999754" (UID: "ca6521c6-d3ea-4dcf-ba8f-c45d32999754"). InnerVolumeSpecName "kube-api-access-gxvjt". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:33:46.453462 systemd[1]: Removed slice kubepods-besteffort-podca6521c6_d3ea_4dcf_ba8f_c45d32999754.slice - libcontainer container kubepods-besteffort-podca6521c6_d3ea_4dcf_ba8f_c45d32999754.slice.
May 27 03:33:46.492094 kubelet[2702]: I0527 03:33:46.492052 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 03:33:46.492094 kubelet[2702]: I0527 03:33:46.492086 2702 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 03:33:46.492094 kubelet[2702]: I0527 03:33:46.492098 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gxvjt\" (UniqueName: \"kubernetes.io/projected/ca6521c6-d3ea-4dcf-ba8f-c45d32999754-kube-api-access-gxvjt\") on node \"localhost\" DevicePath \"\"" May 27 03:33:47.083126 systemd[1]: var-lib-kubelet-pods-ca6521c6\x2dd3ea\x2d4dcf\x2dba8f\x2dc45d32999754-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxvjt.mount: Deactivated successfully. May 27 03:33:47.083238 systemd[1]: var-lib-kubelet-pods-ca6521c6\x2dd3ea\x2d4dcf\x2dba8f\x2dc45d32999754-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:33:47.124938 kubelet[2702]: I0527 03:33:47.124345 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8lrxr" podStartSLOduration=1.751907809 podStartE2EDuration="20.124319578s" podCreationTimestamp="2025-05-27 03:33:27 +0000 UTC" firstStartedPulling="2025-05-27 03:33:27.70605199 +0000 UTC m=+19.341151851" lastFinishedPulling="2025-05-27 03:33:46.07846376 +0000 UTC m=+37.713563620" observedRunningTime="2025-05-27 03:33:47.12357601 +0000 UTC m=+38.758675860" watchObservedRunningTime="2025-05-27 03:33:47.124319578 +0000 UTC m=+38.759419428" May 27 03:33:47.175787 containerd[1582]: time="2025-05-27T03:33:47.175740088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\" id:\"f6a69f12bcdff4be651a331303c3ec7681290ca0ef868e72bc83a25044c42019\" pid:3817 exit_status:1 exited_at:{seconds:1748316827 nanos:175139449}" May 27 03:33:47.187381 systemd[1]: Created slice kubepods-besteffort-pod3ede4255_edc2_43ea_a2e0_613a76641b48.slice - libcontainer container kubepods-besteffort-pod3ede4255_edc2_43ea_a2e0_613a76641b48.slice. 
May 27 03:33:47.197457 kubelet[2702]: I0527 03:33:47.197418 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgq4\" (UniqueName: \"kubernetes.io/projected/3ede4255-edc2-43ea-a2e0-613a76641b48-kube-api-access-fhgq4\") pod \"whisker-5f857bbd57-7clns\" (UID: \"3ede4255-edc2-43ea-a2e0-613a76641b48\") " pod="calico-system/whisker-5f857bbd57-7clns" May 27 03:33:47.197457 kubelet[2702]: I0527 03:33:47.197470 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ede4255-edc2-43ea-a2e0-613a76641b48-whisker-backend-key-pair\") pod \"whisker-5f857bbd57-7clns\" (UID: \"3ede4255-edc2-43ea-a2e0-613a76641b48\") " pod="calico-system/whisker-5f857bbd57-7clns" May 27 03:33:47.197647 kubelet[2702]: I0527 03:33:47.197495 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ede4255-edc2-43ea-a2e0-613a76641b48-whisker-ca-bundle\") pod \"whisker-5f857bbd57-7clns\" (UID: \"3ede4255-edc2-43ea-a2e0-613a76641b48\") " pod="calico-system/whisker-5f857bbd57-7clns" May 27 03:33:47.491736 containerd[1582]: time="2025-05-27T03:33:47.491677273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f857bbd57-7clns,Uid:3ede4255-edc2-43ea-a2e0-613a76641b48,Namespace:calico-system,Attempt:0,}" May 27 03:33:47.671472 systemd-networkd[1490]: cali38371cd96c3: Link UP May 27 03:33:47.672268 systemd-networkd[1490]: cali38371cd96c3: Gained carrier May 27 03:33:47.686580 containerd[1582]: 2025-05-27 03:33:47.559 [INFO][3831] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:33:47.686580 containerd[1582]: 2025-05-27 03:33:47.574 [INFO][3831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f857bbd57--7clns-eth0 whisker-5f857bbd57- calico-system 3ede4255-edc2-43ea-a2e0-613a76641b48 884 0 2025-05-27 03:33:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f857bbd57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f857bbd57-7clns eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali38371cd96c3 [] [] }} ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-" May 27 03:33:47.686580 containerd[1582]: 2025-05-27 03:33:47.574 [INFO][3831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:47.686580 containerd[1582]: 2025-05-27 03:33:47.630 [INFO][3846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" HandleID="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Workload="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.631 [INFO][3846] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" HandleID="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Workload="localhost-k8s-whisker--5f857bbd57--7clns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002adad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f857bbd57-7clns", "timestamp":"2025-05-27 03:33:47.630340602 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.631 [INFO][3846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.631 [INFO][3846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.631 [INFO][3846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.638 [INFO][3846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" host="localhost" May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.642 [INFO][3846] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.646 [INFO][3846] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.648 [INFO][3846] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.650 [INFO][3846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:47.686928 containerd[1582]: 2025-05-27 03:33:47.650 [INFO][3846] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" host="localhost" May 27 03:33:47.687179 containerd[1582]: 2025-05-27 03:33:47.651 [INFO][3846] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9 May 27 03:33:47.687179 containerd[1582]: 2025-05-27 03:33:47.657 [INFO][3846] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" host="localhost" May 27 03:33:47.687179 containerd[1582]: 2025-05-27 03:33:47.660 [INFO][3846] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" host="localhost" May 27 03:33:47.687179 containerd[1582]: 2025-05-27 03:33:47.660 [INFO][3846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" host="localhost" May 27 03:33:47.687179 containerd[1582]: 2025-05-27 03:33:47.660 [INFO][3846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:33:47.687179 containerd[1582]: 2025-05-27 03:33:47.660 [INFO][3846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" HandleID="k8s-pod-network.2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Workload="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:47.687311 containerd[1582]: 2025-05-27 03:33:47.664 [INFO][3831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f857bbd57--7clns-eth0", GenerateName:"whisker-5f857bbd57-", Namespace:"calico-system", SelfLink:"", UID:"3ede4255-edc2-43ea-a2e0-613a76641b48", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f857bbd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f857bbd57-7clns", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38371cd96c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:47.687311 containerd[1582]: 2025-05-27 03:33:47.664 [INFO][3831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:47.687387 containerd[1582]: 2025-05-27 03:33:47.664 [INFO][3831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38371cd96c3 ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:47.687387 containerd[1582]: 2025-05-27 03:33:47.672 [INFO][3831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:47.687430 containerd[1582]: 2025-05-27 03:33:47.672 [INFO][3831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f857bbd57--7clns-eth0", GenerateName:"whisker-5f857bbd57-", Namespace:"calico-system", SelfLink:"", UID:"3ede4255-edc2-43ea-a2e0-613a76641b48", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f857bbd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9", Pod:"whisker-5f857bbd57-7clns", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38371cd96c3", MAC:"4e:e4:2d:9d:05:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:47.687486 containerd[1582]: 2025-05-27 03:33:47.683 [INFO][3831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" Namespace="calico-system" Pod="whisker-5f857bbd57-7clns" WorkloadEndpoint="localhost-k8s-whisker--5f857bbd57--7clns-eth0" May 27 03:33:48.298836 containerd[1582]: time="2025-05-27T03:33:48.298785898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\" id:\"980e753320d26b118006db3098292c9a1afeec08c7f60c6083dba0127cf6df74\" pid:3961 exit_status:1 exited_at:{seconds:1748316828 nanos:296738088}" May 27 03:33:48.331909 containerd[1582]: time="2025-05-27T03:33:48.331843405Z" level=info msg="connecting to shim 2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9" address="unix:///run/containerd/s/3999f6440f7f20bcc3be6b3281698ab53ab5528bedaecb0a7f742954e75d8258" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:48.384746 systemd[1]: Started cri-containerd-2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9.scope - libcontainer container 2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9.
May 27 03:33:48.399242 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:48.448526 kubelet[2702]: I0527 03:33:48.448487 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca6521c6-d3ea-4dcf-ba8f-c45d32999754" path="/var/lib/kubelet/pods/ca6521c6-d3ea-4dcf-ba8f-c45d32999754/volumes" May 27 03:33:48.538582 containerd[1582]: time="2025-05-27T03:33:48.538528641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f857bbd57-7clns,Uid:3ede4255-edc2-43ea-a2e0-613a76641b48,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f09c6977755e7533d00f55c7d5992a59eeddff436117952127ae172567f6fe9\"" May 27 03:33:48.540166 containerd[1582]: time="2025-05-27T03:33:48.540130263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:33:48.589034 systemd-networkd[1490]: vxlan.calico: Link UP May 27 03:33:48.589220 systemd-networkd[1490]: vxlan.calico: Gained carrier May 27 03:33:48.794557 containerd[1582]: time="2025-05-27T03:33:48.794486754Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:33:48.836571 containerd[1582]: time="2025-05-27T03:33:48.836502084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:33:48.844389 containerd[1582]: time="2025-05-27T03:33:48.844231648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:33:48.845055 kubelet[2702]: E0527 03:33:48.844465 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:33:48.845055 kubelet[2702]: E0527 03:33:48.844518 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:33:48.850749 kubelet[2702]: E0527 03:33:48.850682 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7672fcfaa5b47a6ad41dd3d87124757,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f857bbd57-7clns_calico-system(3ede4255-edc2-43ea-a2e0-613a76641b48): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:33:48.852724 containerd[1582]: time="2025-05-27T03:33:48.852552213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:33:48.888765 systemd-networkd[1490]: cali38371cd96c3: Gained IPv6LL May 27 03:33:49.137556 containerd[1582]: time="2025-05-27T03:33:49.137440133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\" id:\"deb47a094294ba0d38f83c1667b5d18c5ed6304ed5bb17e0d857bb62a1acfb74\" pid:4140 exit_status:1 exited_at:{seconds:1748316829 nanos:137088581}" May 27 03:33:49.139559 containerd[1582]: time="2025-05-27T03:33:49.139519522Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:33:49.210017 containerd[1582]: time="2025-05-27T03:33:49.209929822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:33:49.210157 containerd[1582]: time="2025-05-27T03:33:49.209990106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:33:49.210838 kubelet[2702]: E0527 03:33:49.210803 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:33:49.210924 kubelet[2702]: E0527 03:33:49.210851 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:33:49.211008 kubelet[2702]: E0527 03:33:49.210950 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f857bbd57-7clns_calico-system(3ede4255-edc2-43ea-a2e0-613a76641b48): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:33:49.212986 kubelet[2702]: E0527 03:33:49.212937 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f857bbd57-7clns" podUID="3ede4255-edc2-43ea-a2e0-613a76641b48" May 27 03:33:50.063224 kubelet[2702]: E0527 03:33:50.063159 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f857bbd57-7clns" podUID="3ede4255-edc2-43ea-a2e0-613a76641b48" May 27 03:33:50.231786 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL May 27 03:33:50.998501 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:49032.service - OpenSSH per-connection server daemon (10.0.0.1:49032). May 27 03:33:51.046101 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 49032 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:33:51.047412 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:51.051549 systemd-logind[1563]: New session 8 of user core. May 27 03:33:51.060742 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:33:51.187637 sshd[4164]: Connection closed by 10.0.0.1 port 49032 May 27 03:33:51.187906 sshd-session[4162]: pam_unix(sshd:session): session closed for user core May 27 03:33:51.192369 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:49032.service: Deactivated successfully. May 27 03:33:51.194345 systemd[1]: session-8.scope: Deactivated successfully. 
May 27 03:33:51.195155 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. May 27 03:33:51.196364 systemd-logind[1563]: Removed session 8. May 27 03:33:51.446236 kubelet[2702]: E0527 03:33:51.446202 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:51.446892 containerd[1582]: time="2025-05-27T03:33:51.446852153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sdsr9,Uid:5b3df184-106d-4a00-9323-ce4ef282b010,Namespace:kube-system,Attempt:0,}" May 27 03:33:51.532312 systemd-networkd[1490]: cali1035656ebde: Link UP May 27 03:33:51.532737 systemd-networkd[1490]: cali1035656ebde: Gained carrier May 27 03:33:51.543779 containerd[1582]: 2025-05-27 03:33:51.480 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0 coredns-668d6bf9bc- kube-system 5b3df184-106d-4a00-9323-ce4ef282b010 803 0 2025-05-27 03:33:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sdsr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1035656ebde [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-" May 27 03:33:51.543779 containerd[1582]: 2025-05-27 03:33:51.480 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.543779 containerd[1582]: 2025-05-27 03:33:51.502 [INFO][4193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" HandleID="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Workload="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.502 [INFO][4193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" HandleID="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Workload="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050cb30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sdsr9", "timestamp":"2025-05-27 03:33:51.502515982 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.502 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.502 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.502 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.508 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" host="localhost" May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.512 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.515 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.517 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.518 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:51.544004 containerd[1582]: 2025-05-27 03:33:51.518 [INFO][4193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" host="localhost" May 27 03:33:51.544233 containerd[1582]: 2025-05-27 03:33:51.520 [INFO][4193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7 May 27 03:33:51.544233 containerd[1582]: 2025-05-27 03:33:51.523 [INFO][4193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" host="localhost" May 27 03:33:51.544233 containerd[1582]: 2025-05-27 03:33:51.527 [INFO][4193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" host="localhost" May 27 03:33:51.544233 containerd[1582]: 2025-05-27 03:33:51.527 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" host="localhost" May 27 03:33:51.544233 containerd[1582]: 2025-05-27 03:33:51.527 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:33:51.544233 containerd[1582]: 2025-05-27 03:33:51.527 [INFO][4193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" HandleID="k8s-pod-network.106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Workload="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.544351 containerd[1582]: 2025-05-27 03:33:51.530 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b3df184-106d-4a00-9323-ce4ef282b010", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sdsr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1035656ebde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:51.544407 containerd[1582]: 2025-05-27 03:33:51.530 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.544407 containerd[1582]: 2025-05-27 03:33:51.530 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1035656ebde ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.544407 containerd[1582]: 2025-05-27 03:33:51.533 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.544477
containerd[1582]: 2025-05-27 03:33:51.533 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b3df184-106d-4a00-9323-ce4ef282b010", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7", Pod:"coredns-668d6bf9bc-sdsr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1035656ebde", MAC:"a2:7c:44:2e:39:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:51.544477 containerd[1582]: 2025-05-27 03:33:51.539 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" Namespace="kube-system" Pod="coredns-668d6bf9bc-sdsr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sdsr9-eth0" May 27 03:33:51.570732 containerd[1582]: time="2025-05-27T03:33:51.570681928Z" level=info msg="connecting to shim 106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7" address="unix:///run/containerd/s/3a4c7ff734c3689c5d006f9f235cdcd816bbaabe26f424dfaecf98c65279bb22" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:51.602134 systemd[1]: Started cri-containerd-106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7.scope - libcontainer container 106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7.
May 27 03:33:51.614549 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:51.643884 containerd[1582]: time="2025-05-27T03:33:51.643834205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sdsr9,Uid:5b3df184-106d-4a00-9323-ce4ef282b010,Namespace:kube-system,Attempt:0,} returns sandbox id \"106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7\"" May 27 03:33:51.644553 kubelet[2702]: E0527 03:33:51.644528 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:51.647188 containerd[1582]: time="2025-05-27T03:33:51.647148746Z" level=info msg="CreateContainer within sandbox \"106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:33:51.658972 containerd[1582]: time="2025-05-27T03:33:51.658497385Z" level=info msg="Container f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:51.664930 containerd[1582]: time="2025-05-27T03:33:51.664884049Z" level=info msg="CreateContainer within sandbox \"106fe57147c4e1807f3256088c4d2a4f69c654c6165503531513a97c679bf5e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299\"" May 27 03:33:51.665400 containerd[1582]: time="2025-05-27T03:33:51.665341297Z" level=info msg="StartContainer for \"f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299\"" May 27 03:33:51.666333 containerd[1582]: time="2025-05-27T03:33:51.666297064Z" level=info msg="connecting to shim f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299" address="unix:///run/containerd/s/3a4c7ff734c3689c5d006f9f235cdcd816bbaabe26f424dfaecf98c65279bb22" protocol=ttrpc version=3 May 27 03:33:51.695750 systemd[1]: Started cri-containerd-f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299.scope - libcontainer container f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299. 
May 27 03:33:51.724879 containerd[1582]: time="2025-05-27T03:33:51.724445833Z" level=info msg="StartContainer for \"f682b957088c30c2b0621e1bd86281e33d952e638743ba10e73c3dbf97139299\" returns successfully" May 27 03:33:52.066367 kubelet[2702]: E0527 03:33:52.066120 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:52.084231 kubelet[2702]: I0527 03:33:52.083230 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sdsr9" podStartSLOduration=37.083213139 podStartE2EDuration="37.083213139s" podCreationTimestamp="2025-05-27 03:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:52.075255393 +0000 UTC m=+43.710355253" watchObservedRunningTime="2025-05-27 03:33:52.083213139 +0000 UTC m=+43.718312999" May 27 03:33:52.446419 containerd[1582]: time="2025-05-27T03:33:52.446373814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kszft,Uid:89747668-1d74-49e4-b34b-55cf8d01980a,Namespace:calico-system,Attempt:0,}" May 27 03:33:52.534814 systemd-networkd[1490]: califacdee651d1: Link UP May 27 03:33:52.535470 systemd-networkd[1490]: califacdee651d1: Gained carrier May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.479 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--kszft-eth0 goldmane-78d55f7ddc- calico-system 89747668-1d74-49e4-b34b-55cf8d01980a 806 0 2025-05-27 03:33:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-kszft eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califacdee651d1 [] [] }} ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.479 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.502 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" HandleID="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Workload="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.503 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" HandleID="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Workload="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-kszft", 
"timestamp":"2025-05-27 03:33:52.502940567 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.503 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.503 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.503 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.508 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.514 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.518 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.519 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.521 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.521 [INFO][4308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.522 [INFO][4308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09 May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.526 [INFO][4308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.529 [INFO][4308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.530 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" host="localhost" May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.530 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:33:52.549830 containerd[1582]: 2025-05-27 03:33:52.530 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" HandleID="k8s-pod-network.2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Workload="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.550832 containerd[1582]: 2025-05-27 03:33:52.533 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--kszft-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"89747668-1d74-49e4-b34b-55cf8d01980a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-kszft", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califacdee651d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:52.550832 containerd[1582]: 2025-05-27 03:33:52.533 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.550832 containerd[1582]: 2025-05-27 03:33:52.533 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califacdee651d1 ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.550832 containerd[1582]: 2025-05-27 03:33:52.535 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.550832 containerd[1582]: 2025-05-27 03:33:52.536 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--kszft-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"89747668-1d74-49e4-b34b-55cf8d01980a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09", Pod:"goldmane-78d55f7ddc-kszft", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califacdee651d1", MAC:"02:2e:b0:57:3c:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:52.550832 containerd[1582]: 2025-05-27 03:33:52.546 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kszft" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kszft-eth0" May 27 03:33:52.580940 containerd[1582]: time="2025-05-27T03:33:52.580898699Z" level=info msg="connecting to shim 2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09" address="unix:///run/containerd/s/233de2fcc275fd795771bf9f6feaa557bbdb6c8a16d98514e126226514169996" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:52.609748 systemd[1]: Started cri-containerd-2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09.scope - libcontainer container 2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09. 
May 27 03:33:52.622554 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:52.666030 containerd[1582]: time="2025-05-27T03:33:52.665972823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kszft,Uid:89747668-1d74-49e4-b34b-55cf8d01980a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cb5a34c280258e823778ad57b00b1bda0e79259b91ac4e3cbd1cd5b070ebc09\"" May 27 03:33:52.667383 containerd[1582]: time="2025-05-27T03:33:52.667345994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:33:52.906520 containerd[1582]: time="2025-05-27T03:33:52.906343968Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:33:52.907678 containerd[1582]: time="2025-05-27T03:33:52.907627922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:33:52.907776 containerd[1582]: time="2025-05-27T03:33:52.907675150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:33:52.907899 kubelet[2702]: E0527 03:33:52.907851 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:33:52.907899 kubelet[2702]: E0527 03:33:52.907895 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:33:52.911101 kubelet[2702]: E0527 03:33:52.911039 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwszz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-kszft_calico-system(89747668-1d74-49e4-b34b-55cf8d01980a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:33:52.912254 kubelet[2702]: E0527 03:33:52.912211 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:33:53.075049 kubelet[2702]: E0527 03:33:53.075010 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:53.076064 kubelet[2702]: E0527 03:33:53.076009 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:33:53.239777 systemd-networkd[1490]: cali1035656ebde: Gained IPv6LL May 27 03:33:53.446540 containerd[1582]: time="2025-05-27T03:33:53.446483833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-xknc5,Uid:f4b9f017-8bb5-4cec-a91d-73aeff0fac1b,Namespace:calico-apiserver,Attempt:0,}" May 27 03:33:53.446703 containerd[1582]: time="2025-05-27T03:33:53.446531152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b65b8b67-vrhgb,Uid:f3cdcef0-8daf-4680-9267-582dfa8e22eb,Namespace:calico-system,Attempt:0,}" May 27 03:33:53.446703 containerd[1582]: time="2025-05-27T03:33:53.446483853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkksp,Uid:3c1deac8-af48-4170-a1b6-cf33ec7da6f0,Namespace:calico-system,Attempt:0,}" May 27 03:33:53.554903 systemd-networkd[1490]: cali400ff2bc95d: Link UP May 27 03:33:53.555161 systemd-networkd[1490]: cali400ff2bc95d: Gained carrier May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.492 [INFO][4385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xkksp-eth0 csi-node-driver- calico-system 3c1deac8-af48-4170-a1b6-cf33ec7da6f0 693 0 2025-05-27 03:33:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xkksp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali400ff2bc95d [] [] }} ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.492 [INFO][4385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.520 [INFO][4423] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" HandleID="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Workload="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.520 [INFO][4423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" HandleID="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Workload="localhost-k8s-csi--node--driver--xkksp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xkksp", "timestamp":"2025-05-27 03:33:53.52029758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.520 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.520 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.520 [INFO][4423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.526 [INFO][4423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.530 [INFO][4423] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.534 [INFO][4423] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.535 [INFO][4423] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.537 [INFO][4423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.537 [INFO][4423] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.539 [INFO][4423] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.543 [INFO][4423] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.548 [INFO][4423] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.549 [INFO][4423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" host="localhost" May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.549 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:33:53.569759 containerd[1582]: 2025-05-27 03:33:53.549 [INFO][4423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" HandleID="k8s-pod-network.5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Workload="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.570865 containerd[1582]: 2025-05-27 03:33:53.552 [INFO][4385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xkksp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c1deac8-af48-4170-a1b6-cf33ec7da6f0", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xkksp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali400ff2bc95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:53.570865 containerd[1582]: 2025-05-27 03:33:53.552 [INFO][4385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.570865 containerd[1582]: 2025-05-27 03:33:53.552 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali400ff2bc95d ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.570865 containerd[1582]: 2025-05-27 03:33:53.555 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.570865 containerd[1582]: 2025-05-27 03:33:53.555 [INFO][4385] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xkksp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c1deac8-af48-4170-a1b6-cf33ec7da6f0", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df", Pod:"csi-node-driver-xkksp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali400ff2bc95d", MAC:"be:dd:4f:91:44:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:53.570865 containerd[1582]: 2025-05-27 03:33:53.565 [INFO][4385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" Namespace="calico-system" Pod="csi-node-driver-xkksp" WorkloadEndpoint="localhost-k8s-csi--node--driver--xkksp-eth0" May 27 03:33:53.596542 containerd[1582]: time="2025-05-27T03:33:53.596490749Z" level=info msg="connecting to shim 5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df" address="unix:///run/containerd/s/36244e87ef042ad8c07a5ce55f1493b96cfe4d645c26cc38efe944fb55e9d517" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:53.632849 systemd[1]: Started cri-containerd-5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df.scope - libcontainer container 5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df. 
May 27 03:33:53.647832 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:53.663772 systemd-networkd[1490]: caliae2755f8f23: Link UP May 27 03:33:53.664285 systemd-networkd[1490]: caliae2755f8f23: Gained carrier May 27 03:33:53.668761 containerd[1582]: time="2025-05-27T03:33:53.668705893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xkksp,Uid:3c1deac8-af48-4170-a1b6-cf33ec7da6f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df\"" May 27 03:33:53.670346 containerd[1582]: time="2025-05-27T03:33:53.670301831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.487 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0 calico-apiserver-6bf56db757- calico-apiserver f4b9f017-8bb5-4cec-a91d-73aeff0fac1b 813 0 2025-05-27 03:33:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf56db757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bf56db757-xknc5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae2755f8f23 [] [] }} ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.487 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.525 [INFO][4421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" HandleID="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Workload="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.525 [INFO][4421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" HandleID="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Workload="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e33d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bf56db757-xknc5", "timestamp":"2025-05-27 03:33:53.525136583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.525 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.549 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.549 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.628 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.632 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.636 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.638 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.640 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.640 [INFO][4421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.642 [INFO][4421] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6 May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.645 [INFO][4421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.653 [INFO][4421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.653 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" host="localhost" May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.653 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:33:53.676379 containerd[1582]: 2025-05-27 03:33:53.654 [INFO][4421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" HandleID="k8s-pod-network.e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Workload="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.677051 containerd[1582]: 2025-05-27 03:33:53.658 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0", GenerateName:"calico-apiserver-6bf56db757-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4b9f017-8bb5-4cec-a91d-73aeff0fac1b", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf56db757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bf56db757-xknc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae2755f8f23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:53.677051 containerd[1582]: 2025-05-27 03:33:53.658 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.677051 containerd[1582]: 2025-05-27 03:33:53.658 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae2755f8f23 ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.677051 containerd[1582]: 2025-05-27 03:33:53.664 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.677051 containerd[1582]: 2025-05-27 03:33:53.664 [INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0", GenerateName:"calico-apiserver-6bf56db757-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4b9f017-8bb5-4cec-a91d-73aeff0fac1b", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf56db757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6", Pod:"calico-apiserver-6bf56db757-xknc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae2755f8f23", MAC:"f2:85:c1:d3:6d:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:53.677051 containerd[1582]: 2025-05-27 03:33:53.673 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-xknc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--xknc5-eth0" May 27 03:33:53.700140 containerd[1582]: time="2025-05-27T03:33:53.700091126Z" level=info msg="connecting to shim e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6" address="unix:///run/containerd/s/d064d5f96c67991edb465b0d3a71a567b68112701a141c969cad6b567142bdf6" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:53.725729 systemd[1]: Started cri-containerd-e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6.scope - libcontainer container e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6. 
May 27 03:33:53.742299 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:53.760636 systemd-networkd[1490]: cali2b2348214b7: Link UP May 27 03:33:53.762118 systemd-networkd[1490]: cali2b2348214b7: Gained carrier May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.495 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0 calico-kube-controllers-7b65b8b67- calico-system f3cdcef0-8daf-4680-9267-582dfa8e22eb 812 0 2025-05-27 03:33:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b65b8b67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b65b8b67-vrhgb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b2348214b7 [] [] }} ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.495 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.526 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" HandleID="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Workload="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.526 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" HandleID="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Workload="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b65b8b67-vrhgb", "timestamp":"2025-05-27 03:33:53.526779921 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.527 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.654 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.656 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.727 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.734 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.737 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.739 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.741 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.741 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.743 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.748 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.754 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.754 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" host="localhost" May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.754 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:33:53.784030 containerd[1582]: 2025-05-27 03:33:53.754 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" HandleID="k8s-pod-network.e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Workload="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.784838 containerd[1582]: 2025-05-27 03:33:53.757 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0", GenerateName:"calico-kube-controllers-7b65b8b67-", Namespace:"calico-system", SelfLink:"", UID:"f3cdcef0-8daf-4680-9267-582dfa8e22eb", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b65b8b67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b65b8b67-vrhgb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b2348214b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:53.784838 containerd[1582]: 2025-05-27 03:33:53.757 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.784838 containerd[1582]: 2025-05-27 03:33:53.757 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b2348214b7 ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.784838 containerd[1582]: 2025-05-27 03:33:53.764 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.784838 containerd[1582]: 2025-05-27 03:33:53.765 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0", GenerateName:"calico-kube-controllers-7b65b8b67-", Namespace:"calico-system", SelfLink:"", UID:"f3cdcef0-8daf-4680-9267-582dfa8e22eb", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b65b8b67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba", Pod:"calico-kube-controllers-7b65b8b67-vrhgb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b2348214b7", MAC:"a6:81:3c:07:25:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:53.784838 containerd[1582]: 2025-05-27 03:33:53.779 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" Namespace="calico-system" Pod="calico-kube-controllers-7b65b8b67-vrhgb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b65b8b67--vrhgb-eth0" May 27 03:33:53.789418 containerd[1582]: time="2025-05-27T03:33:53.789370674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-xknc5,Uid:f4b9f017-8bb5-4cec-a91d-73aeff0fac1b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6\"" May 27 03:33:53.809791 containerd[1582]: time="2025-05-27T03:33:53.808895812Z" level=info msg="connecting to shim e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba" address="unix:///run/containerd/s/dacc3e2e00a0ca1d00104d51b15e79a015f7092deeff8b5e1a4203fa19da9845" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:53.831738 systemd[1]: Started cri-containerd-e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba.scope - libcontainer container e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba. 
May 27 03:33:53.845581 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:53.876525 containerd[1582]: time="2025-05-27T03:33:53.876491085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b65b8b67-vrhgb,Uid:f3cdcef0-8daf-4680-9267-582dfa8e22eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba\"" May 27 03:33:54.080241 kubelet[2702]: E0527 03:33:54.080139 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:54.081423 kubelet[2702]: E0527 03:33:54.081375 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:33:54.446389 kubelet[2702]: E0527 03:33:54.446348 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:54.446529 containerd[1582]: time="2025-05-27T03:33:54.446449897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-9rkq6,Uid:ec001429-c4e3-432b-9155-d44bc397d9ca,Namespace:calico-apiserver,Attempt:0,}" May 27 03:33:54.447048 containerd[1582]: time="2025-05-27T03:33:54.446991245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vc8v,Uid:1bda2375-27ab-4c61-8b67-2615788d423b,Namespace:kube-system,Attempt:0,}" May 27 03:33:54.455877 systemd-networkd[1490]: califacdee651d1: Gained IPv6LL May 27 03:33:54.545675 systemd-networkd[1490]: cali2cf1828541e: Link UP May 27 03:33:54.546245 systemd-networkd[1490]: cali2cf1828541e: Gained carrier May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.483 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0 calico-apiserver-6bf56db757- calico-apiserver ec001429-c4e3-432b-9155-d44bc397d9ca 814 0 2025-05-27 03:33:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf56db757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bf56db757-9rkq6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cf1828541e [] [] }} ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.483 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.511 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" HandleID="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Workload="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.511 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" HandleID="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Workload="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325520), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bf56db757-9rkq6", "timestamp":"2025-05-27 03:33:54.511230839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.511 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.511 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.511 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.517 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.520 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.524 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.526 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.527 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.528 [INFO][4647] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.530 [INFO][4647] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19 May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.534 [INFO][4647] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 
03:33:54.539 [INFO][4647] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.539 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" host="localhost" May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.539 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:33:54.560592 containerd[1582]: 2025-05-27 03:33:54.539 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" HandleID="k8s-pod-network.a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Workload="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.561239 containerd[1582]: 2025-05-27 03:33:54.542 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0", GenerateName:"calico-apiserver-6bf56db757-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec001429-c4e3-432b-9155-d44bc397d9ca", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf56db757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bf56db757-9rkq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cf1828541e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:54.561239 containerd[1582]: 2025-05-27 03:33:54.542 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.561239 containerd[1582]: 2025-05-27 03:33:54.542 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cf1828541e ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.561239 containerd[1582]: 2025-05-27 03:33:54.546 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.561239 containerd[1582]: 2025-05-27 03:33:54.547 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0", GenerateName:"calico-apiserver-6bf56db757-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec001429-c4e3-432b-9155-d44bc397d9ca", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf56db757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19", Pod:"calico-apiserver-6bf56db757-9rkq6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cf1828541e", MAC:"e2:cc:e4:f6:2d:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:54.561239 containerd[1582]: 2025-05-27 03:33:54.558 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" Namespace="calico-apiserver" Pod="calico-apiserver-6bf56db757-9rkq6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bf56db757--9rkq6-eth0" May 27 03:33:54.581482 containerd[1582]: time="2025-05-27T03:33:54.581397799Z" level=info msg="connecting to shim a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19" address="unix:///run/containerd/s/16425055d75be688e13a6831713273334413ef026ba648d46841a02eb969e6b1" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:54.612731 systemd[1]: Started cri-containerd-a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19.scope - libcontainer container a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19. 
May 27 03:33:54.625339 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:54.655396 systemd-networkd[1490]: calif8355d98e9e: Link UP May 27 03:33:54.656155 systemd-networkd[1490]: calif8355d98e9e: Gained carrier May 27 03:33:54.664010 containerd[1582]: time="2025-05-27T03:33:54.663965469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf56db757-9rkq6,Uid:ec001429-c4e3-432b-9155-d44bc397d9ca,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19\"" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.488 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0 coredns-668d6bf9bc- kube-system 1bda2375-27ab-4c61-8b67-2615788d423b 811 0 2025-05-27 03:33:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2vc8v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif8355d98e9e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.488 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.512 [INFO][4653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" HandleID="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Workload="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.512 [INFO][4653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" HandleID="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Workload="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000536ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2vc8v", "timestamp":"2025-05-27 03:33:54.512222762 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.512 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.539 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.540 [INFO][4653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.618 [INFO][4653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.624 [INFO][4653] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.628 [INFO][4653] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.629 [INFO][4653] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.631 [INFO][4653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.631 [INFO][4653] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.633 [INFO][4653] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.636 [INFO][4653] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.645 [INFO][4653] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.645 [INFO][4653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" host="localhost" May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.645 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
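
The IPAM trace for the coredns pod repeats the fixed sequence logged just above for the apiserver pod: acquire the host-wide IPAM lock, confirm the host's affinity to block 192.168.88.128/26, claim the lowest free address (192.168.88.136 here, after .135 went to calico-apiserver-6bf56db757-9rkq6), write the block back, release the lock. A toy Go sketch of that claim loop, with block state held in memory — an illustration of the logged steps, not Calico's actual datastore code:

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // block models a /26 allocation block with a simple used-address set,
    // standing in for the block document Calico keeps in its datastore.
    type block struct {
    	cidr netip.Prefix
    	used map[netip.Addr]bool
    }

    var ipamLock sync.Mutex // stands in for the host-wide IPAM lock in the log

    // autoAssign claims the lowest free address in the block, mirroring the
    // "Attempting to assign 1 addresses from block" step in the trace.
    func autoAssign(b *block) (netip.Addr, error) {
    	ipamLock.Lock()         // "Acquired host-wide IPAM lock."
    	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if !b.used[a] {
    			b.used[a] = true // "Writing block in order to claim IPs"
    			return a, nil
    		}
    	}
    	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
    	b := &block{
    		cidr: netip.MustParsePrefix("192.168.88.128/26"),
    		used: map[netip.Addr]bool{},
    	}
    	// Pretend .128 through .135 are already taken, as in this log.
    	for a := b.cidr.Addr(); a.Compare(netip.MustParseAddr("192.168.88.136")) < 0; a = a.Next() {
    		b.used[a] = true
    	}
    	ip, err := autoAssign(b)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("claimed", ip) // claimed 192.168.88.136
    }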
May 27 03:33:54.672554 containerd[1582]: 2025-05-27 03:33:54.645 [INFO][4653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" HandleID="k8s-pod-network.66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Workload="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.673132 containerd[1582]: 2025-05-27 03:33:54.652 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1bda2375-27ab-4c61-8b67-2615788d423b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2vc8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8355d98e9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:54.673132 containerd[1582]: 2025-05-27 03:33:54.652 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.673132 containerd[1582]: 2025-05-27 03:33:54.652 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8355d98e9e ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.673132 containerd[1582]: 2025-05-27 03:33:54.655 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.673132 
containerd[1582]: 2025-05-27 03:33:54.657 [INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1bda2375-27ab-4c61-8b67-2615788d423b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b", Pod:"coredns-668d6bf9bc-2vc8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8355d98e9e", MAC:"1e:76:45:1f:82:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:33:54.673132 containerd[1582]: 2025-05-27 03:33:54.667 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" Namespace="kube-system" Pod="coredns-668d6bf9bc-2vc8v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2vc8v-eth0" May 27 03:33:54.693438 containerd[1582]: time="2025-05-27T03:33:54.693385884Z" level=info msg="connecting to shim 66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b" address="unix:///run/containerd/s/d0c057e83c29175e349a2b4a13405c42ebdd2c441a1a7bbc27e07da38ffc7f14" namespace=k8s.io protocol=ttrpc version=3 May 27 03:33:54.720743 systemd[1]: Started cri-containerd-66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b.scope - libcontainer container 66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b. 
May 27 03:33:54.734579 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:33:54.763772 containerd[1582]: time="2025-05-27T03:33:54.763733604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vc8v,Uid:1bda2375-27ab-4c61-8b67-2615788d423b,Namespace:kube-system,Attempt:0,} returns sandbox id \"66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b\"" May 27 03:33:54.764496 kubelet[2702]: E0527 03:33:54.764457 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:54.766316 containerd[1582]: time="2025-05-27T03:33:54.766283836Z" level=info msg="CreateContainer within sandbox \"66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:33:54.783437 containerd[1582]: time="2025-05-27T03:33:54.783392149Z" level=info msg="Container be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:54.790958 containerd[1582]: time="2025-05-27T03:33:54.790917568Z" level=info msg="CreateContainer within sandbox \"66b8a6357c7f06895a7503cc733c19d640ba9e547d8a2468c703512f8ce0926b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532\"" May 27 03:33:54.791462 containerd[1582]: time="2025-05-27T03:33:54.791434669Z" level=info msg="StartContainer for \"be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532\"" May 27 03:33:54.792182 containerd[1582]: time="2025-05-27T03:33:54.792157517Z" level=info msg="connecting to shim be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532" address="unix:///run/containerd/s/d0c057e83c29175e349a2b4a13405c42ebdd2c441a1a7bbc27e07da38ffc7f14" protocol=ttrpc version=3 May 27 03:33:54.812767 systemd[1]: Started cri-containerd-be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532.scope - libcontainer container be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532. 
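
The recurring kubelet dns.go:153 "Nameserver limits exceeded" error means the node's resolv.conf lists more nameservers than kubelet will pass through to pods: it keeps only the first three (the classic glibc MAXNS limit) — here 1.1.1.1, 1.0.0.1 and 8.8.8.8 — and warns about the rest. A small Go check that reproduces the truncation; the path and limit are the conventional ones, and systemd-resolved setups may point kubelet at a different resolv.conf:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet applies only this many

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("resolv.conf lists %d nameservers; kubelet will apply only: %s\n",
    			len(servers), strings.Join(servers[:maxNameservers], " "))
    	} else {
    		fmt.Println("nameserver count within limits:", servers)
    	}
    }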
May 27 03:33:54.846860 containerd[1582]: time="2025-05-27T03:33:54.846821381Z" level=info msg="StartContainer for \"be810dfdf17b0c3c6c50bf69065a12f3ea2e6cfae489176dc0b7be05b27d9532\" returns successfully" May 27 03:33:54.967803 systemd-networkd[1490]: caliae2755f8f23: Gained IPv6LL May 27 03:33:55.085863 kubelet[2702]: E0527 03:33:55.085735 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:55.096562 kubelet[2702]: I0527 03:33:55.095922 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2vc8v" podStartSLOduration=40.095722001 podStartE2EDuration="40.095722001s" podCreationTimestamp="2025-05-27 03:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:55.094780502 +0000 UTC m=+46.729880362" watchObservedRunningTime="2025-05-27 03:33:55.095722001 +0000 UTC m=+46.730821851" May 27 03:33:55.096035 systemd-networkd[1490]: cali2b2348214b7: Gained IPv6LL May 27 03:33:55.223951 systemd-networkd[1490]: cali400ff2bc95d: Gained IPv6LL May 27 03:33:55.523417 containerd[1582]: time="2025-05-27T03:33:55.523361862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:55.524311 containerd[1582]: time="2025-05-27T03:33:55.524220416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:33:55.525530 containerd[1582]: time="2025-05-27T03:33:55.525482586Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:55.527657 containerd[1582]: time="2025-05-27T03:33:55.527604263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:55.528072 containerd[1582]: time="2025-05-27T03:33:55.528049619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.857694197s" May 27 03:33:55.528106 containerd[1582]: time="2025-05-27T03:33:55.528078413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:33:55.529537 containerd[1582]: time="2025-05-27T03:33:55.529495395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:33:55.529876 containerd[1582]: time="2025-05-27T03:33:55.529837999Z" level=info msg="CreateContainer within sandbox \"5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:33:55.544405 containerd[1582]: time="2025-05-27T03:33:55.544354520Z" level=info msg="Container 95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:55.568348 containerd[1582]: 
time="2025-05-27T03:33:55.568305895Z" level=info msg="CreateContainer within sandbox \"5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002\"" May 27 03:33:55.575384 containerd[1582]: time="2025-05-27T03:33:55.575348446Z" level=info msg="StartContainer for \"95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002\"" May 27 03:33:55.576861 containerd[1582]: time="2025-05-27T03:33:55.576830039Z" level=info msg="connecting to shim 95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002" address="unix:///run/containerd/s/36244e87ef042ad8c07a5ce55f1493b96cfe4d645c26cc38efe944fb55e9d517" protocol=ttrpc version=3 May 27 03:33:55.597743 systemd[1]: Started cri-containerd-95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002.scope - libcontainer container 95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002. May 27 03:33:55.608742 systemd-networkd[1490]: cali2cf1828541e: Gained IPv6LL May 27 03:33:55.642472 containerd[1582]: time="2025-05-27T03:33:55.642431379Z" level=info msg="StartContainer for \"95edfd592602bb820a0c8be7e5513c4f90b2e46913634a138f9d80ca2f2cb002\" returns successfully" May 27 03:33:55.735794 systemd-networkd[1490]: calif8355d98e9e: Gained IPv6LL May 27 03:33:56.089541 kubelet[2702]: E0527 03:33:56.089502 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:56.205800 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:49192.service - OpenSSH per-connection server daemon (10.0.0.1:49192). May 27 03:33:56.261947 sshd[4852]: Accepted publickey for core from 10.0.0.1 port 49192 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:33:56.263417 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:33:56.267710 systemd-logind[1563]: New session 9 of user core. May 27 03:33:56.272728 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:33:56.405304 sshd[4854]: Connection closed by 10.0.0.1 port 49192 May 27 03:33:56.405533 sshd-session[4852]: pam_unix(sshd:session): session closed for user core May 27 03:33:56.409462 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:49192.service: Deactivated successfully. May 27 03:33:56.411589 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:33:56.412515 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. May 27 03:33:56.413821 systemd-logind[1563]: Removed session 9. 
May 27 03:33:57.090971 kubelet[2702]: E0527 03:33:57.090941 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:33:58.647123 containerd[1582]: time="2025-05-27T03:33:58.647078635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:58.647819 containerd[1582]: time="2025-05-27T03:33:58.647791885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:33:58.650807 containerd[1582]: time="2025-05-27T03:33:58.650761832Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:58.652885 containerd[1582]: time="2025-05-27T03:33:58.652838232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:33:58.653461 containerd[1582]: time="2025-05-27T03:33:58.653429302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.123886097s" May 27 03:33:58.653461 containerd[1582]: time="2025-05-27T03:33:58.653457505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:33:58.661386 containerd[1582]: time="2025-05-27T03:33:58.661225664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:33:58.673366 containerd[1582]: time="2025-05-27T03:33:58.673332953Z" level=info msg="CreateContainer within sandbox \"e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:33:58.681883 containerd[1582]: time="2025-05-27T03:33:58.681845100Z" level=info msg="Container 5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811: CDI devices from CRI Config.CDIDevices: []" May 27 03:33:58.696533 containerd[1582]: time="2025-05-27T03:33:58.696406588Z" level=info msg="CreateContainer within sandbox \"e810d6d1ae581e6c863252b520db60d50026fdd287c1442da0ed57f9989516c6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811\"" May 27 03:33:58.699876 containerd[1582]: time="2025-05-27T03:33:58.699689784Z" level=info msg="StartContainer for \"5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811\"" May 27 03:33:58.701027 containerd[1582]: time="2025-05-27T03:33:58.700994094Z" level=info msg="connecting to shim 5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811" address="unix:///run/containerd/s/d064d5f96c67991edb465b0d3a71a567b68112701a141c969cad6b567142bdf6" protocol=ttrpc version=3 May 27 03:33:58.777750 systemd[1]: Started cri-containerd-5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811.scope - libcontainer container 
5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811. May 27 03:33:58.837746 containerd[1582]: time="2025-05-27T03:33:58.837693951Z" level=info msg="StartContainer for \"5773a6ba99dcd0c09d64b5d6b9cfc93b0bcee36aac30504330f3b19e190d4811\" returns successfully" May 27 03:33:59.110138 kubelet[2702]: I0527 03:33:59.109168 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bf56db757-xknc5" podStartSLOduration=30.239825833 podStartE2EDuration="35.109154798s" podCreationTimestamp="2025-05-27 03:33:24 +0000 UTC" firstStartedPulling="2025-05-27 03:33:53.791731219 +0000 UTC m=+45.426831069" lastFinishedPulling="2025-05-27 03:33:58.661060184 +0000 UTC m=+50.296160034" observedRunningTime="2025-05-27 03:33:59.108888076 +0000 UTC m=+50.743987936" watchObservedRunningTime="2025-05-27 03:33:59.109154798 +0000 UTC m=+50.744254658" May 27 03:34:01.421720 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:49198.service - OpenSSH per-connection server daemon (10.0.0.1:49198). May 27 03:34:01.476310 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 49198 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:01.478286 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:01.484678 systemd-logind[1563]: New session 10 of user core. May 27 03:34:01.492807 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:34:01.696399 sshd[4933]: Connection closed by 10.0.0.1 port 49198 May 27 03:34:01.697039 sshd-session[4931]: pam_unix(sshd:session): session closed for user core May 27 03:34:01.701673 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:49198.service: Deactivated successfully. May 27 03:34:01.706513 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:34:01.708808 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. May 27 03:34:01.711240 systemd-logind[1563]: Removed session 10. 
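
The pod_startup_latency_tracker entry above for calico-apiserver-6bf56db757-xknc5 publishes two durations; from the logged timestamps, podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). That reading is inferred from this log's own numbers, which it reproduces exactly, rather than quoted from kubelet source. A short Go check, with timestamps copied from the entry and the monotonic "m=+..." suffixes dropped for parsing:

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Layout matches the kubelet log's wall-clock timestamp format.
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-05-27 03:33:24 +0000 UTC")
    	firstPull := mustParse("2025-05-27 03:33:53.791731219 +0000 UTC")
    	lastPull := mustParse("2025-05-27 03:33:58.661060184 +0000 UTC")
    	observed := mustParse("2025-05-27 03:33:59.109154798 +0000 UTC") // watchObservedRunningTime

    	e2e := observed.Sub(created)         // 35.109154798s = podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // 30.239825833s = podStartSLOduration
    	fmt.Println(e2e, slo)
    }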
May 27 03:34:02.116989 containerd[1582]: time="2025-05-27T03:34:02.116864001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:02.117770 containerd[1582]: time="2025-05-27T03:34:02.117743914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:34:02.119070 containerd[1582]: time="2025-05-27T03:34:02.119025129Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:02.120842 containerd[1582]: time="2025-05-27T03:34:02.120811674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:02.121383 containerd[1582]: time="2025-05-27T03:34:02.121351177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.4600974s" May 27 03:34:02.121383 containerd[1582]: time="2025-05-27T03:34:02.121379440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:34:02.122250 containerd[1582]: time="2025-05-27T03:34:02.122231980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:34:02.128628 containerd[1582]: time="2025-05-27T03:34:02.128330722Z" level=info msg="CreateContainer within sandbox \"e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:34:02.136993 containerd[1582]: time="2025-05-27T03:34:02.136965264Z" level=info msg="Container e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c: CDI devices from CRI Config.CDIDevices: []" May 27 03:34:02.145239 containerd[1582]: time="2025-05-27T03:34:02.145201427Z" level=info msg="CreateContainer within sandbox \"e26d49551fcce14d47579719aab9a7057f545df83ea2b4b143af5091760a92ba\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c\"" May 27 03:34:02.145659 containerd[1582]: time="2025-05-27T03:34:02.145630493Z" level=info msg="StartContainer for \"e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c\"" May 27 03:34:02.146812 containerd[1582]: time="2025-05-27T03:34:02.146770433Z" level=info msg="connecting to shim e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c" address="unix:///run/containerd/s/dacc3e2e00a0ca1d00104d51b15e79a015f7092deeff8b5e1a4203fa19da9845" protocol=ttrpc version=3 May 27 03:34:02.192749 systemd[1]: Started cri-containerd-e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c.scope - libcontainer container e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c. 
May 27 03:34:02.304333 containerd[1582]: time="2025-05-27T03:34:02.304289004Z" level=info msg="StartContainer for \"e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c\" returns successfully" May 27 03:34:02.981577 containerd[1582]: time="2025-05-27T03:34:02.981511091Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:02.982230 containerd[1582]: time="2025-05-27T03:34:02.982194885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:34:02.990130 containerd[1582]: time="2025-05-27T03:34:02.990082034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 867.765415ms" May 27 03:34:02.990130 containerd[1582]: time="2025-05-27T03:34:02.990110377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:34:02.991325 containerd[1582]: time="2025-05-27T03:34:02.991188421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:34:02.992186 containerd[1582]: time="2025-05-27T03:34:02.992159775Z" level=info msg="CreateContainer within sandbox \"a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:34:03.000247 containerd[1582]: time="2025-05-27T03:34:03.000205021Z" level=info msg="Container a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d: CDI devices from CRI Config.CDIDevices: []" May 27 03:34:03.008578 containerd[1582]: time="2025-05-27T03:34:03.008539127Z" level=info msg="CreateContainer within sandbox \"a9dd8b9a8cb72ab6cfae1bff49cc3f99a381fcc2b6cf710a918f70d8147b3f19\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d\"" May 27 03:34:03.009132 containerd[1582]: time="2025-05-27T03:34:03.009094520Z" level=info msg="StartContainer for \"a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d\"" May 27 03:34:03.010229 containerd[1582]: time="2025-05-27T03:34:03.010167314Z" level=info msg="connecting to shim a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d" address="unix:///run/containerd/s/16425055d75be688e13a6831713273334413ef026ba648d46841a02eb969e6b1" protocol=ttrpc version=3 May 27 03:34:03.041890 systemd[1]: Started cri-containerd-a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d.scope - libcontainer container a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d. 
May 27 03:34:03.172860 containerd[1582]: time="2025-05-27T03:34:03.172120973Z" level=info msg="StartContainer for \"a1512f7d5e9a00dc929dc96ac2a0e91148af811e2910c10eb6002384c0591d0d\" returns successfully" May 27 03:34:03.194554 kubelet[2702]: I0527 03:34:03.194485 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b65b8b67-vrhgb" podStartSLOduration=27.950461061 podStartE2EDuration="36.194469311s" podCreationTimestamp="2025-05-27 03:33:27 +0000 UTC" firstStartedPulling="2025-05-27 03:33:53.878066796 +0000 UTC m=+45.513166646" lastFinishedPulling="2025-05-27 03:34:02.122075016 +0000 UTC m=+53.757174896" observedRunningTime="2025-05-27 03:34:03.192252629 +0000 UTC m=+54.827352490" watchObservedRunningTime="2025-05-27 03:34:03.194469311 +0000 UTC m=+54.829569161" May 27 03:34:03.207681 kubelet[2702]: I0527 03:34:03.206411 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bf56db757-9rkq6" podStartSLOduration=30.880521124 podStartE2EDuration="39.206391274s" podCreationTimestamp="2025-05-27 03:33:24 +0000 UTC" firstStartedPulling="2025-05-27 03:33:54.665038906 +0000 UTC m=+46.300138766" lastFinishedPulling="2025-05-27 03:34:02.990909056 +0000 UTC m=+54.626008916" observedRunningTime="2025-05-27 03:34:03.205254661 +0000 UTC m=+54.840354511" watchObservedRunningTime="2025-05-27 03:34:03.206391274 +0000 UTC m=+54.841491165" May 27 03:34:03.243401 containerd[1582]: time="2025-05-27T03:34:03.243274791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c\" id:\"cd78baa6e54015fcd856a0db45bc74b0b046d21eaa63f95550ad534d7ece1432\" pid:5039 exited_at:{seconds:1748316843 nanos:242498173}" May 27 03:34:04.183305 kubelet[2702]: I0527 03:34:04.183272 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:34:05.718595 containerd[1582]: time="2025-05-27T03:34:05.718519973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:05.719307 containerd[1582]: time="2025-05-27T03:34:05.719249643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:34:05.720446 containerd[1582]: time="2025-05-27T03:34:05.720400554Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:05.722452 containerd[1582]: time="2025-05-27T03:34:05.722406539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:34:05.723016 containerd[1582]: time="2025-05-27T03:34:05.722981318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.731746951s" May 27 03:34:05.723016 containerd[1582]: time="2025-05-27T03:34:05.723011214Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:34:05.724168 containerd[1582]: time="2025-05-27T03:34:05.724117742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:34:05.726206 containerd[1582]: time="2025-05-27T03:34:05.726158884Z" level=info msg="CreateContainer within sandbox \"5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:34:05.736172 containerd[1582]: time="2025-05-27T03:34:05.736132546Z" level=info msg="Container b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651: CDI devices from CRI Config.CDIDevices: []" May 27 03:34:05.745241 containerd[1582]: time="2025-05-27T03:34:05.745211470Z" level=info msg="CreateContainer within sandbox \"5aa9b4aa046745f0babe035cc59d8b3a226d3def1974cbc7942f8d9ee3a736df\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651\"" May 27 03:34:05.745782 containerd[1582]: time="2025-05-27T03:34:05.745743579Z" level=info msg="StartContainer for \"b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651\"" May 27 03:34:05.747128 containerd[1582]: time="2025-05-27T03:34:05.747104534Z" level=info msg="connecting to shim b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651" address="unix:///run/containerd/s/36244e87ef042ad8c07a5ce55f1493b96cfe4d645c26cc38efe944fb55e9d517" protocol=ttrpc version=3 May 27 03:34:05.767743 systemd[1]: Started cri-containerd-b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651.scope - libcontainer container b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651. 
May 27 03:34:05.809891 containerd[1582]: time="2025-05-27T03:34:05.809841566Z" level=info msg="StartContainer for \"b09b5ab12bdb9ad56c547b89f9703214148d666452e2e12a00b7d8e2a7f10651\" returns successfully" May 27 03:34:06.012318 containerd[1582]: time="2025-05-27T03:34:06.012203176Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:34:06.013702 containerd[1582]: time="2025-05-27T03:34:06.013661884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:34:06.013765 containerd[1582]: time="2025-05-27T03:34:06.013732477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:34:06.013969 kubelet[2702]: E0527 03:34:06.013907 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:34:06.014325 kubelet[2702]: E0527 03:34:06.013966 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:34:06.017959 kubelet[2702]: E0527 03:34:06.017898 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7672fcfaa5b47a6ad41dd3d87124757,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f857bbd57-7clns_calico-system(3ede4255-edc2-43ea-a2e0-613a76641b48): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:34:06.019971 containerd[1582]: time="2025-05-27T03:34:06.019932134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:34:06.251423 containerd[1582]: time="2025-05-27T03:34:06.251358665Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:34:06.481112 containerd[1582]: time="2025-05-27T03:34:06.480696795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:34:06.481341 containerd[1582]: time="2025-05-27T03:34:06.480773559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:34:06.482129 kubelet[2702]: E0527 03:34:06.482079 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:34:06.482267 kubelet[2702]: E0527 03:34:06.482138 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:34:06.482267 kubelet[2702]: E0527 03:34:06.482242 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f857bbd57-7clns_calico-system(3ede4255-edc2-43ea-a2e0-613a76641b48): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:34:06.483832 kubelet[2702]: E0527 03:34:06.483785 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f857bbd57-7clns" podUID="3ede4255-edc2-43ea-a2e0-613a76641b48" May 27 03:34:06.487533 kubelet[2702]: I0527 03:34:06.487460 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xkksp" podStartSLOduration=27.433641054 podStartE2EDuration="39.487448529s" podCreationTimestamp="2025-05-27 03:33:27 +0000 UTC" firstStartedPulling="2025-05-27 03:33:53.670094301 +0000 UTC m=+45.305194161" lastFinishedPulling="2025-05-27 03:34:05.723901776 +0000 UTC m=+57.359001636" observedRunningTime="2025-05-27 03:34:06.485434799 +0000 UTC m=+58.120534689" watchObservedRunningTime="2025-05-27 03:34:06.487448529 +0000 UTC m=+58.122548389" May 27 03:34:06.502282 kubelet[2702]: I0527 03:34:06.502237 2702 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 03:34:06.502282 kubelet[2702]: I0527 03:34:06.502276 2702 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 03:34:06.714752 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:43146.service - OpenSSH per-connection server daemon (10.0.0.1:43146). May 27 03:34:06.778809 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 43146 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:06.780198 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:06.784559 systemd-logind[1563]: New session 11 of user core. May 27 03:34:06.791745 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:34:06.915113 sshd[5096]: Connection closed by 10.0.0.1 port 43146 May 27 03:34:06.915386 sshd-session[5093]: pam_unix(sshd:session): session closed for user core May 27 03:34:06.926388 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:43146.service: Deactivated successfully. May 27 03:34:06.928316 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:34:06.929330 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. May 27 03:34:06.932308 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:43162.service - OpenSSH per-connection server daemon (10.0.0.1:43162). May 27 03:34:06.933879 systemd-logind[1563]: Removed session 11. May 27 03:34:06.973833 sshd[5111]: Accepted publickey for core from 10.0.0.1 port 43162 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:06.975127 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:06.979359 systemd-logind[1563]: New session 12 of user core. May 27 03:34:06.988726 systemd[1]: Started session-12.scope - Session 12 of User core. 
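The "Observed pod startup duration" record above for csi-node-driver-xkksp carries three durations, and they are internally consistent if podStartSLOduration is taken to be the end-to-end startup time minus the time spent pulling images. A quick check of that reading, using the m=+ monotonic offsets copied from the log line (the subtraction rule is an interpretation of kubelet's pod_startup_latency_tracker, not something the log itself states):

```python
# Values copied from the kubelet log line above (m=+ monotonic offsets, seconds).
pod_start_e2e = 39.487448529   # podStartE2EDuration
pull_start    = 45.305194161   # firstStartedPulling, m=+45.305194161
pull_end      = 57.359001636   # lastFinishedPulling, m=+57.359001636

# Interpretation under test: the SLO duration excludes time spent pulling images.
slo = pod_start_e2e - (pull_end - pull_start)
print(round(slo, 9))           # 27.433641054 -- matches podStartSLOduration
```

The numbers agree to the nanosecond, which supports reading the metric as "startup time excluding image pulls".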
May 27 03:34:07.461972 sshd[5113]: Connection closed by 10.0.0.1 port 43162 May 27 03:34:07.462266 sshd-session[5111]: pam_unix(sshd:session): session closed for user core May 27 03:34:07.476305 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:43162.service: Deactivated successfully. May 27 03:34:07.479978 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:34:07.481703 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. May 27 03:34:07.485844 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:43178.service - OpenSSH per-connection server daemon (10.0.0.1:43178). May 27 03:34:07.487214 systemd-logind[1563]: Removed session 12. May 27 03:34:07.528489 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 43178 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:07.529905 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:07.534553 systemd-logind[1563]: New session 13 of user core. May 27 03:34:07.544739 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:34:07.655083 sshd[5126]: Connection closed by 10.0.0.1 port 43178 May 27 03:34:07.655470 sshd-session[5124]: pam_unix(sshd:session): session closed for user core May 27 03:34:07.659943 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:43178.service: Deactivated successfully. May 27 03:34:07.662011 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:34:07.662738 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. May 27 03:34:07.663895 systemd-logind[1563]: Removed session 13. May 27 03:34:09.447207 containerd[1582]: time="2025-05-27T03:34:09.447106687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:34:09.680369 containerd[1582]: time="2025-05-27T03:34:09.680312358Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:34:09.681364 containerd[1582]: time="2025-05-27T03:34:09.681325730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:34:09.681475 containerd[1582]: time="2025-05-27T03:34:09.681407844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:34:09.681627 kubelet[2702]: E0527 03:34:09.681557 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:34:09.682050 kubelet[2702]: E0527 03:34:09.681631 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:34:09.682050 kubelet[2702]: E0527 03:34:09.681818 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwszz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-kszft_calico-system(89747668-1d74-49e4-b34b-55cf8d01980a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden" logger="UnhandledError" May 27 03:34:09.683077 kubelet[2702]: E0527 03:34:09.683017 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:34:12.667433 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:43188.service - OpenSSH per-connection server daemon (10.0.0.1:43188). May 27 03:34:12.721065 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 43188 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:12.722467 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:12.726723 systemd-logind[1563]: New session 14 of user core. May 27 03:34:12.739751 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:34:12.853071 sshd[5156]: Connection closed by 10.0.0.1 port 43188 May 27 03:34:12.853358 sshd-session[5154]: pam_unix(sshd:session): session closed for user core May 27 03:34:12.857772 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:43188.service: Deactivated successfully. May 27 03:34:12.860279 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:34:12.861286 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit. May 27 03:34:12.863305 systemd-logind[1563]: Removed session 14. May 27 03:34:17.446438 kubelet[2702]: E0527 03:34:17.446402 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:34:17.871391 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:35528.service - OpenSSH per-connection server daemon (10.0.0.1:35528). May 27 03:34:17.922909 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 35528 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:17.924252 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:17.928137 systemd-logind[1563]: New session 15 of user core. May 27 03:34:17.937742 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:34:18.081682 sshd[5175]: Connection closed by 10.0.0.1 port 35528 May 27 03:34:18.081949 sshd-session[5173]: pam_unix(sshd:session): session closed for user core May 27 03:34:18.085578 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:35528.service: Deactivated successfully. May 27 03:34:18.087540 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:34:18.088278 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. May 27 03:34:18.089382 systemd-logind[1563]: Removed session 15. 
May 27 03:34:18.452128 kubelet[2702]: E0527 03:34:18.452064 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f857bbd57-7clns" podUID="3ede4255-edc2-43ea-a2e0-613a76641b48" May 27 03:34:19.142149 containerd[1582]: time="2025-05-27T03:34:19.142070562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\" id:\"1c237cf4f65b2c6ad79a66ef6fc44fd63e7b1dd0974ab2e00576143463c69d84\" pid:5199 exited_at:{seconds:1748316859 nanos:141756988}" May 27 03:34:22.447628 kubelet[2702]: E0527 03:34:22.447540 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:34:23.096663 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:35544.service - OpenSSH per-connection server daemon (10.0.0.1:35544). May 27 03:34:23.152869 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 35544 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:23.154439 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:23.158879 systemd-logind[1563]: New session 16 of user core. May 27 03:34:23.166734 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:34:23.290666 sshd[5215]: Connection closed by 10.0.0.1 port 35544 May 27 03:34:23.290974 sshd-session[5213]: pam_unix(sshd:session): session closed for user core May 27 03:34:23.295436 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:35544.service: Deactivated successfully. May 27 03:34:23.297592 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:34:23.298312 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. May 27 03:34:23.299629 systemd-logind[1563]: Removed session 16. 
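Note the shift between 03:34:06 and 03:34:18: the same containers fail first with ErrImagePull and then with ImagePullBackOff, i.e. the kubelet has stopped retrying immediately and is spacing out pull attempts. The retry cadence visible here (seconds apart at first, then minutes) matches an exponential back-off with a ceiling; a rough model, with the 10 s base and 300 s cap treated as assumptions about the kubelet's defaults rather than values taken from this log:

```python
def backoff_schedule(base=10.0, cap=300.0, factor=2.0, attempts=8):
    """Exponential back-off with a ceiling, yielding successive delays."""
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)

print(list(backoff_schedule()))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```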
May 27 03:34:24.446481 kubelet[2702]: E0527 03:34:24.446444 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:34:28.302735 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:38470.service - OpenSSH per-connection server daemon (10.0.0.1:38470). May 27 03:34:28.359634 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 38470 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:28.361688 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:28.367846 systemd-logind[1563]: New session 17 of user core. May 27 03:34:28.377821 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:34:28.542131 sshd[5233]: Connection closed by 10.0.0.1 port 38470 May 27 03:34:28.542484 sshd-session[5231]: pam_unix(sshd:session): session closed for user core May 27 03:34:28.546428 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:38470.service: Deactivated successfully. May 27 03:34:28.548280 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:34:28.549306 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. May 27 03:34:28.550445 systemd-logind[1563]: Removed session 17. May 27 03:34:29.446855 containerd[1582]: time="2025-05-27T03:34:29.446574298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:34:29.691410 containerd[1582]: time="2025-05-27T03:34:29.691335555Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:34:29.735773 containerd[1582]: time="2025-05-27T03:34:29.735651791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:34:29.735773 containerd[1582]: time="2025-05-27T03:34:29.735723609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:34:29.735899 kubelet[2702]: E0527 03:34:29.735867 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:34:29.736353 kubelet[2702]: E0527 03:34:29.735917 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:34:29.736353 kubelet[2702]: E0527 03:34:29.736020 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7672fcfaa5b47a6ad41dd3d87124757,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f857bbd57-7clns_calico-system(3ede4255-edc2-43ea-a2e0-613a76641b48): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:34:29.738076 containerd[1582]: time="2025-05-27T03:34:29.738040130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:34:30.077798 containerd[1582]: time="2025-05-27T03:34:30.077668550Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:34:30.157932 containerd[1582]: time="2025-05-27T03:34:30.157869408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:34:30.158082 containerd[1582]: time="2025-05-27T03:34:30.157883644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:34:30.158191 kubelet[2702]: E0527 03:34:30.158152 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:34:30.158285 kubelet[2702]: E0527 03:34:30.158199 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:34:30.158335 kubelet[2702]: E0527 03:34:30.158291 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f857bbd57-7clns_calico-system(3ede4255-edc2-43ea-a2e0-613a76641b48): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:34:30.159924 kubelet[2702]: E0527 03:34:30.159881 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f857bbd57-7clns" podUID="3ede4255-edc2-43ea-a2e0-613a76641b48" May 27 03:34:33.222973 containerd[1582]: time="2025-05-27T03:34:33.222922060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c\" id:\"5f60ad095e019a8aadbec0a365c4042a3d8eb7ff171551994337b92eed9f32f1\" pid:5264 exited_at:{seconds:1748316873 nanos:222719303}" May 27 03:34:33.558374 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:55510.service - OpenSSH per-connection server daemon (10.0.0.1:55510). May 27 03:34:33.594533 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 55510 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:33.596126 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:33.600758 systemd-logind[1563]: New session 18 of user core. May 27 03:34:33.612748 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:34:33.723077 sshd[5278]: Connection closed by 10.0.0.1 port 55510 May 27 03:34:33.723390 sshd-session[5276]: pam_unix(sshd:session): session closed for user core May 27 03:34:33.735131 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:55510.service: Deactivated successfully. May 27 03:34:33.736793 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:34:33.737546 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. May 27 03:34:33.740465 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:55514.service - OpenSSH per-connection server daemon (10.0.0.1:55514). May 27 03:34:33.741076 systemd-logind[1563]: Removed session 18. May 27 03:34:33.793343 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 55514 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:33.794740 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:33.798915 systemd-logind[1563]: New session 19 of user core. May 27 03:34:33.809728 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:34:33.994698 sshd[5293]: Connection closed by 10.0.0.1 port 55514 May 27 03:34:33.995110 sshd-session[5291]: pam_unix(sshd:session): session closed for user core May 27 03:34:34.006177 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:55514.service: Deactivated successfully. May 27 03:34:34.007852 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:34:34.008620 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. May 27 03:34:34.011053 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:55530.service - OpenSSH per-connection server daemon (10.0.0.1:55530). 
May 27 03:34:34.011638 systemd-logind[1563]: Removed session 19. May 27 03:34:34.072912 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 55530 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:34.074213 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:34.078286 systemd-logind[1563]: New session 20 of user core. May 27 03:34:34.085722 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:34:34.456809 kubelet[2702]: E0527 03:34:34.456779 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:34:35.074487 sshd[5309]: Connection closed by 10.0.0.1 port 55530 May 27 03:34:35.074974 sshd-session[5307]: pam_unix(sshd:session): session closed for user core May 27 03:34:35.083406 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:55530.service: Deactivated successfully. May 27 03:34:35.085321 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:34:35.086315 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. May 27 03:34:35.089112 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:55544.service - OpenSSH per-connection server daemon (10.0.0.1:55544). May 27 03:34:35.089758 systemd-logind[1563]: Removed session 20. May 27 03:34:35.135094 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 55544 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:35.136587 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:35.140973 systemd-logind[1563]: New session 21 of user core. May 27 03:34:35.156835 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:34:35.398025 sshd[5333]: Connection closed by 10.0.0.1 port 55544 May 27 03:34:35.398539 sshd-session[5331]: pam_unix(sshd:session): session closed for user core May 27 03:34:35.411488 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:55544.service: Deactivated successfully. May 27 03:34:35.413894 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:34:35.414903 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. May 27 03:34:35.419638 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:55560.service - OpenSSH per-connection server daemon (10.0.0.1:55560). May 27 03:34:35.420322 systemd-logind[1563]: Removed session 21. May 27 03:34:35.467184 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 55560 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:35.468787 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:35.473471 systemd-logind[1563]: New session 22 of user core. May 27 03:34:35.490743 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:34:35.606796 sshd[5346]: Connection closed by 10.0.0.1 port 55560 May 27 03:34:35.607146 sshd-session[5344]: pam_unix(sshd:session): session closed for user core May 27 03:34:35.612203 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:55560.service: Deactivated successfully. May 27 03:34:35.614690 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:34:35.615535 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. May 27 03:34:35.616863 systemd-logind[1563]: Removed session 22. 
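The recurring dns.go:153 warning is independent of the registry trouble: the node's resolver configuration lists more nameservers than the kubelet will propagate into pods, so the list is truncated to the three shown (1.1.1.1, 1.0.0.1, 8.8.8.8). A small checker in the same spirit, with the limit of 3 stated as an assumption inferred from the warning rather than read from this log:

```python
MAX_NAMESERVERS = 3  # assumed kubelet limit implied by the dns.go warning

def check_resolv_conf(path="/etc/resolv.conf"):
    servers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    if len(servers) > MAX_NAMESERVERS:
        print(f"{len(servers)} nameservers; only {servers[:MAX_NAMESERVERS]} "
              f"would be applied")
    else:
        print(f"{len(servers)} nameservers; within the limit")
    return servers

check_resolv_conf()
```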
May 27 03:34:35.802322 kubelet[2702]: I0527 03:34:35.802277 2702 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:34:36.448802 containerd[1582]: time="2025-05-27T03:34:36.448752342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:34:36.670496 containerd[1582]: time="2025-05-27T03:34:36.670446209Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:34:36.671457 containerd[1582]: time="2025-05-27T03:34:36.671425536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:34:36.671588 containerd[1582]: time="2025-05-27T03:34:36.671498064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:34:36.671720 kubelet[2702]: E0527 03:34:36.671673 2702 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:34:36.671779 kubelet[2702]: E0527 03:34:36.671727 2702 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:34:36.671894 kubelet[2702]: E0527 03:34:36.671836 2702 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwszz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-kszft_calico-system(89747668-1d74-49e4-b34b-55cf8d01980a): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:34:36.673139 kubelet[2702]: E0527 03:34:36.673060 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:34:40.623471 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:55568.service - OpenSSH per-connection server daemon (10.0.0.1:55568). May 27 03:34:40.679933 sshd[5361]: Accepted publickey for core from 10.0.0.1 port 55568 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:40.681392 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:40.685391 systemd-logind[1563]: New session 23 of user core. May 27 03:34:40.694752 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:34:40.810020 sshd[5363]: Connection closed by 10.0.0.1 port 55568 May 27 03:34:40.810354 sshd-session[5361]: pam_unix(sshd:session): session closed for user core May 27 03:34:40.813332 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:55568.service: Deactivated successfully. May 27 03:34:40.815651 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:34:40.817352 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. May 27 03:34:40.818760 systemd-logind[1563]: Removed session 23. May 27 03:34:43.446827 kubelet[2702]: E0527 03:34:43.446762 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5f857bbd57-7clns" podUID="3ede4255-edc2-43ea-a2e0-613a76641b48" May 27 03:34:45.446389 kubelet[2702]: E0527 03:34:45.446361 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:34:45.827484 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:52588.service - OpenSSH per-connection server daemon (10.0.0.1:52588). May 27 03:34:45.872183 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 52588 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:45.873670 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:45.877676 systemd-logind[1563]: New session 24 of user core. May 27 03:34:45.884773 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 03:34:45.988531 sshd[5380]: Connection closed by 10.0.0.1 port 52588 May 27 03:34:45.988841 sshd-session[5378]: pam_unix(sshd:session): session closed for user core May 27 03:34:45.993211 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:52588.service: Deactivated successfully. May 27 03:34:45.995050 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:34:45.995791 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit. May 27 03:34:45.996933 systemd-logind[1563]: Removed session 24. May 27 03:34:46.433951 containerd[1582]: time="2025-05-27T03:34:46.433907653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7ae45fc000cce40c0f1e6eb3df1ac150553fdec5dc419a4df2d3176b396ff8c\" id:\"ca93cf90d63fe8a4369bdb4501a89e60775c5fa93d9d4d7f04aee277bd4d465f\" pid:5406 exited_at:{seconds:1748316886 nanos:433718103}" May 27 03:34:48.447331 kubelet[2702]: E0527 03:34:48.447255 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kszft" podUID="89747668-1d74-49e4-b34b-55cf8d01980a" May 27 03:34:49.133650 containerd[1582]: time="2025-05-27T03:34:49.133496405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28736f24428bcf1f7028055e51dd1e6071e536d4a6a9abc7d6781c84f5dc3ece\" id:\"8c54c2604ad5d5becd449a909bf3b0e990b31f351a5ebbf0abc2ff0bc1d83ec3\" pid:5428 exited_at:{seconds:1748316889 nanos:132928307}" May 27 03:34:49.188151 update_engine[1564]: I20250527 03:34:49.188093 1564 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 03:34:49.188151 update_engine[1564]: I20250527 03:34:49.188148 1564 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 27 03:34:49.188541 update_engine[1564]: I20250527 03:34:49.188519 1564 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 27 03:34:49.189158 update_engine[1564]: I20250527 03:34:49.189131 1564 omaha_request_params.cc:62] Current group set to alpha May 27 03:34:49.189312 update_engine[1564]: I20250527 03:34:49.189288 1564 update_attempter.cc:499] Already updated boot flags. Skipping. May 27 03:34:49.189312 update_engine[1564]: I20250527 03:34:49.189301 1564 update_attempter.cc:643] Scheduling an action processor start. 
May 27 03:34:49.189358 update_engine[1564]: I20250527 03:34:49.189328 1564 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 27 03:34:49.189378 update_engine[1564]: I20250527 03:34:49.189368 1564 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 27 03:34:49.189449 update_engine[1564]: I20250527 03:34:49.189425 1564 omaha_request_action.cc:271] Posting an Omaha request to disabled May 27 03:34:49.189449 update_engine[1564]: I20250527 03:34:49.189438 1564 omaha_request_action.cc:272] Request: May 27 03:34:49.189449 update_engine[1564]: [Omaha request XML body elided] May 27 03:34:49.189449 update_engine[1564]: I20250527 03:34:49.189445 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 27 03:34:49.195689 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 27 03:34:49.196945 update_engine[1564]: I20250527 03:34:49.196904 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 03:34:49.197307 update_engine[1564]: I20250527 03:34:49.197244 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 03:34:49.204349 update_engine[1564]: E20250527 03:34:49.204314 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 03:34:49.204394 update_engine[1564]: I20250527 03:34:49.204366 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 27 03:34:51.005676 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:52592.service - OpenSSH per-connection server daemon (10.0.0.1:52592). May 27 03:34:51.071267 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 52592 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:51.073108 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:51.078575 systemd-logind[1563]: New session 25 of user core. May 27 03:34:51.088736 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 03:34:51.251943 sshd[5443]: Connection closed by 10.0.0.1 port 52592 May 27 03:34:51.252250 sshd-session[5443]: pam_unix(sshd:session): session closed for user core May 27 03:34:51.256714 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:52592.service: Deactivated successfully. May 27 03:34:51.258830 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:34:51.259552 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. May 27 03:34:51.261500 systemd-logind[1563]: Removed session 25. May 27 03:34:54.446178 kubelet[2702]: E0527 03:34:54.446139 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:34:56.264785 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:44448.service - OpenSSH per-connection server daemon (10.0.0.1:44448). 
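The update_engine failure at 03:34:49 is benign and self-inflicted: the request is "Posting an Omaha request to disabled", i.e. the update server URL appears to have been set to the literal string "disabled" (a common way to switch off Flatcar's update checks), which libcurl then dutifully tries to resolve as a hostname. The resulting "Could not resolve host: disabled" is trivial to reproduce:

```python
import socket

try:
    # What libcurl effectively attempts when given "disabled" as the server.
    socket.getaddrinfo("disabled", 443)
except socket.gaierror as e:
    print("resolution failed:", e)  # mirrors 'Could not resolve host: disabled'
```

locksmithd's UPDATE_STATUS_CHECKING_FOR_UPDATE line and the "No HTTP response, retry 1" that follow are the expected fallout of that setting, not a separate fault.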
May 27 03:34:56.307658 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 44448 ssh2: RSA SHA256:28Bggi7Fgl5ol89PGYBtCkx+o5rLsXiIKLwfpE1JZmQ May 27 03:34:56.309272 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:34:56.313685 systemd-logind[1563]: New session 26 of user core. May 27 03:34:56.321737 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 03:34:56.455156 sshd[5459]: Connection closed by 10.0.0.1 port 44448 May 27 03:34:56.455432 sshd-session[5457]: pam_unix(sshd:session): session closed for user core May 27 03:34:56.459817 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:44448.service: Deactivated successfully. May 27 03:34:56.461925 systemd[1]: session-26.scope: Deactivated successfully. May 27 03:34:56.462674 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit. May 27 03:34:56.464002 systemd-logind[1563]: Removed session 26.