May 27 03:54:40.853025 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:54:40.853046 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:54:40.853055 kernel: BIOS-provided physical RAM map:
May 27 03:54:40.853063 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 27 03:54:40.853069 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 27 03:54:40.853074 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 03:54:40.853096 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 27 03:54:40.853103 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 27 03:54:40.853108 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 03:54:40.853114 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 03:54:40.853120 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:54:40.853125 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 03:54:40.853133 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 27 03:54:40.853139 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:54:40.853145 kernel: NX (Execute Disable) protection: active
May 27 03:54:40.853151 kernel: APIC: Static calls initialized
May 27 03:54:40.853157 kernel: SMBIOS 2.8 present.
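The BIOS-e820 lines above are the firmware's memory map; summing the "usable" ranges is a quick sanity check that the guest really got its provisioned RAM. A minimal, throwaway parser over a captured copy of this log (the `console.log` path is a placeholder):

```python
import re

# Matches the kernel's "BIOS-e820: [mem 0xSTART-0xEND] TYPE" lines as printed above.
E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_text: str) -> int:
    total = 0
    for start, end, kind in E820.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

with open("console.log") as f:  # hypothetical capture of this boot log
    print(usable_bytes(f.read()) / 2**30, "GiB")  # ~4 GiB for this guest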
May 27 03:54:40.853165 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 27 03:54:40.853171 kernel: DMI: Memory slots populated: 1/1
May 27 03:54:40.853177 kernel: Hypervisor detected: KVM
May 27 03:54:40.853183 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:54:40.853189 kernel: kvm-clock: using sched offset of 5737435206 cycles
May 27 03:54:40.853195 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:54:40.853202 kernel: tsc: Detected 2000.000 MHz processor
May 27 03:54:40.853208 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:54:40.853215 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:54:40.853221 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 27 03:54:40.853229 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 03:54:40.853235 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:54:40.853256 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 27 03:54:40.853262 kernel: Using GB pages for direct mapping
May 27 03:54:40.853277 kernel: ACPI: Early table checksum verification disabled
May 27 03:54:40.853293 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 27 03:54:40.853299 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853305 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853311 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853319 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 27 03:54:40.853343 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853350 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853356 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853365 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:40.853372 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 27 03:54:40.853380 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 27 03:54:40.853386 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 27 03:54:40.853393 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 27 03:54:40.853399 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 27 03:54:40.853405 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 27 03:54:40.853412 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 27 03:54:40.853418 kernel: No NUMA configuration found
May 27 03:54:40.853424 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 27 03:54:40.853432 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 27 03:54:40.853438 kernel: Zone ranges:
May 27 03:54:40.853445 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:54:40.853451 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 03:54:40.853457 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 27 03:54:40.853463 kernel: Device empty
May 27 03:54:40.853470 kernel: Movable zone start for each node
May 27 03:54:40.853476 kernel: Early memory node ranges
May 27 03:54:40.853482 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 03:54:40.853488 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 27 03:54:40.853496 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 27 03:54:40.853503 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 27 03:54:40.853509 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:54:40.853515 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:54:40.853521 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 27 03:54:40.853528 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:54:40.853534 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:54:40.853540 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:54:40.853547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:54:40.853555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:54:40.853561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:54:40.853567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:54:40.853574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:54:40.853580 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:54:40.853599 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:54:40.853605 kernel: TSC deadline timer available
May 27 03:54:40.853611 kernel: CPU topo: Max. logical packages: 1
May 27 03:54:40.853618 kernel: CPU topo: Max. logical dies: 1
May 27 03:54:40.853626 kernel: CPU topo: Max. dies per package: 1
May 27 03:54:40.853632 kernel: CPU topo: Max. threads per core: 1
May 27 03:54:40.853638 kernel: CPU topo: Num. cores per package: 2
May 27 03:54:40.853644 kernel: CPU topo: Num. threads per package: 2
May 27 03:54:40.853651 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:54:40.853657 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:54:40.853663 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:54:40.853669 kernel: kvm-guest: setup PV sched yield
May 27 03:54:40.853676 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 03:54:40.853682 kernel: Booting paravirtualized kernel on KVM
May 27 03:54:40.853690 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:54:40.853697 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:54:40.853703 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:54:40.853709 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:54:40.853715 kernel: pcpu-alloc: [0] 0 1
May 27 03:54:40.853722 kernel: kvm-guest: PV spinlocks enabled
May 27 03:54:40.853728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:54:40.853735 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:54:40.853744 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
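Note the "Kernel command line:" echo above repeats rootflags=rw mount.usrflags=ro at the front; the bootloader/initrd prepended them to the line from the BIOS command line, and duplicates are harmless because most consumers take the last occurrence of a key. A minimal sketch of that kind of parse (the cmdline string is abbreviated from the log):

```python
# Split the echoed kernel command line into key/value pairs the way most
# tooling does (later occurrences override earlier ones).
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr root=LABEL=ROOT rootflags=rw mount.usrflags=ro"
)  # abbreviated from the log above

def parse(cmdline: str) -> dict:
    args = {}
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        args[key] = val if sep else None  # bare flags carry no value
    return args

print(parse(cmdline)["root"])  # LABEL=ROOT
```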
May 27 03:54:40.853750 kernel: random: crng init done
May 27 03:54:40.853756 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:54:40.853763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:54:40.853769 kernel: Fallback order for Node 0: 0
May 27 03:54:40.853775 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 27 03:54:40.853781 kernel: Policy zone: Normal
May 27 03:54:40.853788 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:54:40.853794 kernel: software IO TLB: area num 2.
May 27 03:54:40.853802 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:54:40.853808 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:54:40.853815 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:54:40.853821 kernel: Dynamic Preempt: voluntary
May 27 03:54:40.853827 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:54:40.853834 kernel: rcu: RCU event tracing is enabled.
May 27 03:54:40.853841 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:54:40.853847 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:54:40.853854 kernel: Rude variant of Tasks RCU enabled.
May 27 03:54:40.853862 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:54:40.853868 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:54:40.853874 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:54:40.853881 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:54:40.853893 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:54:40.853902 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:54:40.853908 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 03:54:40.853915 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:54:40.853921 kernel: Console: colour VGA+ 80x25
May 27 03:54:40.853928 kernel: printk: legacy console [tty0] enabled
May 27 03:54:40.853935 kernel: printk: legacy console [ttyS0] enabled
May 27 03:54:40.853941 kernel: ACPI: Core revision 20240827
May 27 03:54:40.853950 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:54:40.853956 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:54:40.853963 kernel: x2apic enabled
May 27 03:54:40.855011 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:54:40.855020 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:54:40.855030 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:54:40.855037 kernel: kvm-guest: setup PV IPIs
May 27 03:54:40.855044 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:54:40.855050 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 27 03:54:40.855057 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 27 03:54:40.855064 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:54:40.855071 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:54:40.855077 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:54:40.855086 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:54:40.855093 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:54:40.855099 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:54:40.855106 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 27 03:54:40.855113 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:54:40.855119 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:54:40.855126 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:54:40.855133 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:54:40.855140 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:54:40.855149 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:54:40.855155 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:54:40.855162 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:54:40.855168 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 27 03:54:40.855175 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:54:40.855182 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 27 03:54:40.855188 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 27 03:54:40.855195 kernel: Freeing SMP alternatives memory: 32K
May 27 03:54:40.855203 kernel: pid_max: default: 32768 minimum: 301
May 27 03:54:40.855210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:54:40.855217 kernel: landlock: Up and running.
May 27 03:54:40.855224 kernel: SELinux: Initializing.
May 27 03:54:40.855230 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:54:40.855237 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:54:40.855244 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 27 03:54:40.855250 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:54:40.855257 kernel: ... version: 0
May 27 03:54:40.855265 kernel: ... bit width: 48
May 27 03:54:40.855272 kernel: ... generic registers: 6
May 27 03:54:40.855284 kernel: ... value mask: 0000ffffffffffff
May 27 03:54:40.855301 kernel: ... max period: 00007fffffffffff
May 27 03:54:40.855308 kernel: ... fixed-purpose events: 0
May 27 03:54:40.855332 kernel: ... event mask: 000000000000003f
May 27 03:54:40.855339 kernel: signal: max sigframe size: 3376
May 27 03:54:40.855345 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:54:40.855352 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:54:40.855359 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:54:40.855368 kernel: smp: Bringing up secondary CPUs ...
May 27 03:54:40.855374 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:54:40.855381 kernel: .... node #0, CPUs: #1
May 27 03:54:40.855392 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:54:40.855399 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 27 03:54:40.855406 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 227296K reserved, 0K cma-reserved)
May 27 03:54:40.855413 kernel: devtmpfs: initialized
May 27 03:54:40.855420 kernel: x86/mm: Memory block size: 128MB
May 27 03:54:40.855426 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:54:40.855435 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:54:40.855442 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:54:40.855448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:54:40.855455 kernel: audit: initializing netlink subsys (disabled)
May 27 03:54:40.855461 kernel: audit: type=2000 audit(1748318078.720:1): state=initialized audit_enabled=0 res=1
May 27 03:54:40.855468 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:54:40.855475 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:54:40.855481 kernel: cpuidle: using governor menu
May 27 03:54:40.855488 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:54:40.855496 kernel: dca service started, version 1.12.1
May 27 03:54:40.855503 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 03:54:40.855509 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 03:54:40.855516 kernel: PCI: Using configuration type 1 for base access
May 27 03:54:40.855523 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
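The x86/fpu lines earlier in the log print each XSAVE feature individually (0x001, 0x002, 0x004, 0x200) and then a summary mask, "Enabled xstate features 0x207". Decoding the mask bit by bit recovers exactly those four features; a minimal sketch (names copied from the log lines, not a full kernel table):

```python
# Decode the "Enabled xstate features 0x207" summary mask from the x86/fpu
# lines above. Bit N corresponds to the feature value 1 << N the kernel printed.
NAMES = {
    0: "x87 floating point registers",
    1: "SSE registers",
    2: "AVX registers",
    9: "Protection Keys User registers",
}

mask = 0x207
enabled = [bit for bit in range(64) if mask & (1 << bit)]
print(enabled)                                   # [0, 1, 2, 9]
print([NAMES.get(b, "?") for b in enabled])      # matches the four lines logged
```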
May 27 03:54:40.855529 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:54:40.855536 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:54:40.855542 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:54:40.855551 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:54:40.855557 kernel: ACPI: Added _OSI(Module Device)
May 27 03:54:40.855564 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:54:40.855571 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:54:40.855577 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:54:40.855584 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:54:40.855590 kernel: ACPI: Interpreter enabled
May 27 03:54:40.855597 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:54:40.855603 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:54:40.855610 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:54:40.855618 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:54:40.855625 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:54:40.855632 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:54:40.855793 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:54:40.855906 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:54:40.856052 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:54:40.856063 kernel: PCI host bridge to bus 0000:00
May 27 03:54:40.856180 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:54:40.856279 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:54:40.856374 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:54:40.856468 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 27 03:54:40.856561 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 03:54:40.856654 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 27 03:54:40.856751 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:54:40.856874 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:54:40.857514 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:54:40.857636 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 03:54:40.857743 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 03:54:40.857849 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 03:54:40.858020 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:54:40.858149 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 27 03:54:40.858256 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 27 03:54:40.858361 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 03:54:40.858472 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 03:54:40.858584 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:54:40.858690 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 27 03:54:40.858799 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 03:54:40.858907 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 03:54:40.860479 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 03:54:40.860605 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:54:40.860714 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:54:40.860827 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:54:40.860932 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 27 03:54:40.861129 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 27 03:54:40.861251 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:54:40.861357 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 03:54:40.861374 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:54:40.861381 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:54:40.861388 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:54:40.861395 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:54:40.861401 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:54:40.861411 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:54:40.861418 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:54:40.861424 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:54:40.861431 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:54:40.861438 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:54:40.861444 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:54:40.861451 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:54:40.861457 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:54:40.861464 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:54:40.861472 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:54:40.861479 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:54:40.861486 kernel: iommu: Default domain type: Translated
May 27 03:54:40.861492 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:54:40.861499 kernel: PCI: Using ACPI for IRQ routing
May 27 03:54:40.861505 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:54:40.861512 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 27 03:54:40.861519 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 27 03:54:40.861624 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:54:40.861731 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:54:40.861835 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:54:40.861845 kernel: vgaarb: loaded
May 27 03:54:40.861852 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:54:40.861858 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:54:40.861865 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:54:40.861872 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:54:40.861879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:54:40.861888 kernel: pnp: PnP ACPI init
May 27 03:54:40.863062 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 03:54:40.863077 kernel: pnp: PnP ACPI: found 5 devices
May 27 03:54:40.863085 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:54:40.863092 kernel: NET: Registered PF_INET protocol family
May 27 03:54:40.863099 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:54:40.863106 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:54:40.863112 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:54:40.863119 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:54:40.863129 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:54:40.863136 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:54:40.863143 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:54:40.863150 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:54:40.863156 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:54:40.863163 kernel: NET: Registered PF_XDP protocol family
May 27 03:54:40.863267 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:54:40.863364 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:54:40.863463 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:54:40.863558 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 27 03:54:40.863666 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 03:54:40.863762 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 27 03:54:40.863771 kernel: PCI: CLS 0 bytes, default 64
May 27 03:54:40.863778 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 03:54:40.863785 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 27 03:54:40.863792 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 27 03:54:40.863799 kernel: Initialise system trusted keyrings
May 27 03:54:40.863808 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:54:40.863815 kernel: Key type asymmetric registered
May 27 03:54:40.863822 kernel: Asymmetric key parser 'x509' registered
May 27 03:54:40.863828 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:54:40.863835 kernel: io scheduler mq-deadline registered
May 27 03:54:40.863841 kernel: io scheduler kyber registered
May 27 03:54:40.863848 kernel: io scheduler bfq registered
May 27 03:54:40.863854 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:54:40.863862 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:54:40.863870 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:54:40.863877 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:54:40.863883 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:54:40.863890 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:54:40.863897 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:54:40.863903 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:54:40.863910 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:54:40.864079 kernel: rtc_cmos 00:03: RTC can wake from S4
May 27 03:54:40.864188 kernel: rtc_cmos 00:03: registered as rtc0
May 27 03:54:40.864292 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T03:54:40 UTC (1748318080)
May 27 03:54:40.864404 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 03:54:40.864413 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:54:40.864420 kernel: NET: Registered PF_INET6 protocol family
May 27 03:54:40.864427 kernel: Segment Routing with IPv6
May 27 03:54:40.864434 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:54:40.864440 kernel: NET: Registered PF_PACKET protocol family
May 27 03:54:40.864447 kernel: Key type dns_resolver registered
May 27 03:54:40.864456 kernel: IPI shorthand broadcast: enabled
May 27 03:54:40.864463 kernel: sched_clock: Marking stable (2679004593, 229368920)->(2947287346, -38913833)
May 27 03:54:40.864470 kernel: registered taskstats version 1
May 27 03:54:40.864477 kernel: Loading compiled-in X.509 certificates
May 27 03:54:40.864484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:54:40.864490 kernel: Demotion targets for Node 0: null
May 27 03:54:40.864497 kernel: Key type .fscrypt registered
May 27 03:54:40.864504 kernel: Key type fscrypt-provisioning registered
May 27 03:54:40.864511 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:54:40.864519 kernel: ima: Allocated hash algorithm: sha1
May 27 03:54:40.864526 kernel: ima: No architecture policies found
May 27 03:54:40.864533 kernel: clk: Disabling unused clocks
May 27 03:54:40.864540 kernel: Warning: unable to open an initial console.
May 27 03:54:40.864547 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:54:40.864553 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:54:40.864560 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:54:40.864567 kernel: Run /init as init process
May 27 03:54:40.864575 kernel: with arguments:
May 27 03:54:40.864582 kernel: /init
May 27 03:54:40.864588 kernel: with environment:
May 27 03:54:40.864595 kernel: HOME=/
May 27 03:54:40.864602 kernel: TERM=linux
May 27 03:54:40.864620 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:54:40.864630 systemd[1]: Successfully made /usr/ read-only.
May 27 03:54:40.864640 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:54:40.864650 systemd[1]: Detected virtualization kvm.
May 27 03:54:40.864657 systemd[1]: Detected architecture x86-64.
May 27 03:54:40.864664 systemd[1]: Running in initrd.
May 27 03:54:40.864672 systemd[1]: No hostname configured, using default hostname.
May 27 03:54:40.864680 systemd[1]: Hostname set to .
May 27 03:54:40.864687 systemd[1]: Initializing machine ID from random generator.
May 27 03:54:40.864694 systemd[1]: Queued start job for default target initrd.target.
May 27 03:54:40.864702 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:54:40.864711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:54:40.864719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:54:40.864727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:54:40.864734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:54:40.864743 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:54:40.864751 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:54:40.864759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:54:40.864768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:54:40.864776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:54:40.864783 systemd[1]: Reached target paths.target - Path Units.
May 27 03:54:40.864790 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:54:40.864798 systemd[1]: Reached target swap.target - Swaps.
May 27 03:54:40.864805 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:54:40.864813 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:54:40.864820 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:54:40.864829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:54:40.864837 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:54:40.864844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:54:40.864852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:54:40.864859 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:54:40.864867 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:54:40.864878 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:54:40.864885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:54:40.864893 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:54:40.864900 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:54:40.864910 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:54:40.864917 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:54:40.864924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:54:40.864932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:40.864941 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:54:40.864949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:54:40.864957 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:54:40.867031 systemd-journald[206]: Collecting audit messages is disabled.
May 27 03:54:40.867055 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:54:40.867064 systemd-journald[206]: Journal started
May 27 03:54:40.867083 systemd-journald[206]: Runtime Journal (/run/log/journal/8deef228203c461d85220a6d04d5cf7a) is 8M, max 78.5M, 70.5M free.
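The rtc_cmos entry earlier pairs an ISO timestamp with the raw epoch value it set (1748318080), and the audit line carries its own epoch (1748318078.720) from roughly two seconds earlier in bring-up. A one-liner confirms the two notations agree:

```python
from datetime import datetime, timezone

# Cross-check the epoch printed by rtc_cmos against its ISO rendering.
print(datetime.fromtimestamp(1748318080, tz=timezone.utc).isoformat())
# -> 2025-05-27T03:54:40+00:00, matching "setting system clock to 2025-05-27T03:54:40 UTC"
```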
May 27 03:54:40.854270 systemd-modules-load[207]: Inserted module 'overlay'
May 27 03:54:40.922054 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:54:40.922074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:54:40.922086 kernel: Bridge firewalling registered
May 27 03:54:40.885078 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 27 03:54:40.926376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:54:40.927165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:40.928354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:54:40.931379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:54:40.934741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:54:40.938651 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:54:40.944602 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:54:40.951469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:54:40.956738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:54:40.960095 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:54:40.962006 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:54:40.962677 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:54:40.967378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:54:40.972074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:54:40.982925 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:54:41.014691 systemd-resolved[245]: Positive Trust Anchors:
May 27 03:54:41.015076 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:54:41.015105 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:54:41.021088 systemd-resolved[245]: Defaulting to hostname 'linux'.
May 27 03:54:41.022101 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:54:41.022649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:54:41.064990 kernel: SCSI subsystem initialized
May 27 03:54:41.074047 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:54:41.083996 kernel: iscsi: registered transport (tcp)
May 27 03:54:41.102999 kernel: iscsi: registered transport (qla4xxx)
May 27 03:54:41.103022 kernel: QLogic iSCSI HBA Driver
May 27 03:54:41.122486 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:54:41.143833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:54:41.146602 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:54:41.194508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:54:41.196436 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:54:41.240988 kernel: raid6: avx2x4 gen() 31686 MB/s
May 27 03:54:41.258992 kernel: raid6: avx2x2 gen() 30490 MB/s
May 27 03:54:41.277547 kernel: raid6: avx2x1 gen() 21181 MB/s
May 27 03:54:41.277566 kernel: raid6: using algorithm avx2x4 gen() 31686 MB/s
May 27 03:54:41.296512 kernel: raid6: .... xor() 5097 MB/s, rmw enabled
May 27 03:54:41.296544 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:54:41.316000 kernel: xor: automatically using best checksumming function avx
May 27 03:54:41.445017 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:54:41.453362 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:54:41.455811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:54:41.483082 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 27 03:54:41.488450 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:54:41.492540 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:54:41.516514 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 27 03:54:41.544661 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:54:41.546749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:54:41.612591 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:54:41.616652 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:54:41.685998 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:54:41.689992 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 27 03:54:41.700995 kernel: libata version 3.00 loaded.
May 27 03:54:41.709001 kernel: scsi host0: Virtio SCSI HBA
May 27 03:54:41.710793 kernel: ahci 0000:00:1f.2: version 3.0
May 27 03:54:41.711005 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 03:54:41.788082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:54:41.795175 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 03:54:41.797151 kernel: AES CTR mode by8 optimization enabled
May 27 03:54:41.797165 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 03:54:41.797298 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 27 03:54:41.797329 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 03:54:41.797587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:41.801317 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:41.804727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:41.827113 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 03:54:41.829050 kernel: scsi host1: ahci
May 27 03:54:41.828724 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:54:41.833509 kernel: scsi host2: ahci
May 27 03:54:41.833697 kernel: scsi host3: ahci
May 27 03:54:41.839879 kernel: scsi host4: ahci
May 27 03:54:41.852384 kernel: scsi host5: ahci
May 27 03:54:41.855000 kernel: scsi host6: ahci
May 27 03:54:41.859236 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 0
May 27 03:54:41.859258 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 0
May 27 03:54:41.863415 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 0
May 27 03:54:41.863436 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 0
May 27 03:54:41.866460 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 0
May 27 03:54:41.868483 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 0
May 27 03:54:41.922163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:42.183021 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 03:54:42.183068 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 27 03:54:42.183080 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 03:54:42.186990 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 03:54:42.187019 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 03:54:42.191989 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 03:54:42.208506 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 27 03:54:42.212314 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 27 03:54:42.212665 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 27 03:54:42.212799 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 27 03:54:42.212929 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 27 03:54:42.237502 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:54:42.237527 kernel: GPT:9289727 != 167739391
May 27 03:54:42.237540 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:54:42.238929 kernel: GPT:9289727 != 167739391
May 27 03:54:42.240239 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:54:42.242467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:42.243721 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 27 03:54:42.288256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 27 03:54:42.308216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 27 03:54:42.309160 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:54:42.318151 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 27 03:54:42.325434 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 27 03:54:42.326047 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 27 03:54:42.328397 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:54:42.329031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:54:42.330234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:54:42.333075 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:54:42.336061 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:54:42.349858 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:54:42.356650 disk-uuid[639]: Primary Header is updated.
May 27 03:54:42.356650 disk-uuid[639]: Secondary Entries is updated.
May 27 03:54:42.356650 disk-uuid[639]: Secondary Header is updated.
May 27 03:54:42.360337 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:43.383166 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:43.384625 disk-uuid[647]: The operation has completed successfully.
May 27 03:54:43.456844 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:54:43.457021 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:54:43.473045 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:54:43.487226 sh[661]: Success
May 27 03:54:43.506201 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:54:43.506240 kernel: device-mapper: uevent: version 1.0.3
May 27 03:54:43.508018 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:54:43.521006 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:54:43.571477 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:54:43.576052 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:54:43.587908 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:54:43.599454 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:54:43.599488 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (673)
May 27 03:54:43.605208 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:54:43.605239 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:43.607134 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:54:43.618480 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:54:43.619988 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:54:43.620856 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:54:43.621677 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:54:43.625723 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
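The GPT warnings above ("Alternate GPT header not at the end of the disk", 9289727 != 167739391) are what you see when an image built for a small disk lands on a larger volume; disk-uuid.service then rewrote the headers, which is the "Primary Header is updated ... Secondary Header is updated" sequence. A hedged sketch of the manual equivalent, assuming GPT fdisk (sgdisk) is installed (its -e flag relocates the backup structures, -v verifies):

```python
import subprocess

# Sketch of the manual fix for a backup GPT header stranded mid-disk after a
# volume grow, roughly what disk-uuid.service automated above. Destructive
# tooling: point it only at a disk you can afford to rewrite.
DISK = "/dev/sda"  # placeholder device

subprocess.run(["sgdisk", "-e", DISK], check=True)  # move backup header to disk end
subprocess.run(["sgdisk", "-v", DISK], check=True)  # re-verify the partition table
```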
May 27 03:54:43.654023 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (708)
May 27 03:54:43.658167 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:43.658194 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:43.658206 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:54:43.677030 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:43.678132 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:54:43.679807 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:54:43.766172 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:54:43.771090 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:54:43.795080 ignition[773]: Ignition 2.21.0
May 27 03:54:43.795099 ignition[773]: Stage: fetch-offline
May 27 03:54:43.795126 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:43.795137 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:43.795220 ignition[773]: parsed url from cmdline: ""
May 27 03:54:43.797194 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:54:43.795224 ignition[773]: no config URL provided
May 27 03:54:43.795230 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:54:43.795239 ignition[773]: no config at "/usr/lib/ignition/user.ign"
May 27 03:54:43.795244 ignition[773]: failed to fetch config: resource requires networking
May 27 03:54:43.795503 ignition[773]: Ignition finished successfully
May 27 03:54:43.813647 systemd-networkd[847]: lo: Link UP
May 27 03:54:43.813660 systemd-networkd[847]: lo: Gained carrier
May 27 03:54:43.815117 systemd-networkd[847]: Enumeration completed
May 27 03:54:43.815488 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:54:43.815492 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:54:43.816071 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:54:43.817372 systemd-networkd[847]: eth0: Link UP
May 27 03:54:43.817376 systemd-networkd[847]: eth0: Gained carrier
May 27 03:54:43.817384 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:54:43.817870 systemd[1]: Reached target network.target - Network.
May 27 03:54:43.821419 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 03:54:43.850287 ignition[851]: Ignition 2.21.0
May 27 03:54:43.850300 ignition[851]: Stage: fetch
May 27 03:54:43.850437 ignition[851]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:43.850448 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:43.850534 ignition[851]: parsed url from cmdline: ""
May 27 03:54:43.850539 ignition[851]: no config URL provided
May 27 03:54:43.850543 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:54:43.850552 ignition[851]: no config at "/usr/lib/ignition/user.ign"
May 27 03:54:43.850595 ignition[851]: PUT http://169.254.169.254/v1/token: attempt #1
May 27 03:54:43.850781 ignition[851]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 03:54:44.051666 ignition[851]: PUT http://169.254.169.254/v1/token: attempt #2
May 27 03:54:44.051747 ignition[851]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 03:54:44.272032 systemd-networkd[847]: eth0: DHCPv4 address 172.237.145.45/24, gateway 172.237.145.1 acquired from 23.192.120.37
May 27 03:54:44.452452 ignition[851]: PUT http://169.254.169.254/v1/token: attempt #3
May 27 03:54:44.547864 ignition[851]: PUT result: OK
May 27 03:54:44.547947 ignition[851]: GET http://169.254.169.254/v1/user-data: attempt #1
May 27 03:54:44.661955 ignition[851]: GET result: OK
May 27 03:54:44.662157 ignition[851]: parsing config with SHA512: d89b8bce1434700d7ef0b9d97af22956d1fe97f1d99f25d465beaf5e2020ee2a41484d447abc5bbe2512bf789159ff98a41213a5d70f2f10ca2a54bb2102e020
May 27 03:54:44.665688 unknown[851]: fetched base config from "system"
May 27 03:54:44.665706 unknown[851]: fetched base config from "system"
May 27 03:54:44.665715 unknown[851]: fetched user config from "akamai"
May 27 03:54:44.668299 ignition[851]: fetch: fetch complete
May 27 03:54:44.668314 ignition[851]: fetch: fetch passed
May 27 03:54:44.668409 ignition[851]: Ignition finished successfully
May 27 03:54:44.672509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 03:54:44.674945 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:54:44.704112 ignition[859]: Ignition 2.21.0
May 27 03:54:44.704127 ignition[859]: Stage: kargs
May 27 03:54:44.704242 ignition[859]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:44.704254 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:44.707712 ignition[859]: kargs: kargs passed
May 27 03:54:44.707773 ignition[859]: Ignition finished successfully
May 27 03:54:44.711384 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:54:44.713957 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:54:44.741936 ignition[866]: Ignition 2.21.0
May 27 03:54:44.741952 ignition[866]: Stage: disks
May 27 03:54:44.742086 ignition[866]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:44.742098 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:44.744502 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:54:44.742642 ignition[866]: disks: disks passed
May 27 03:54:44.745276 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:54:44.742676 ignition[866]: Ignition finished successfully
May 27 03:54:44.746359 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
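The fetch stage above shows the token-then-user-data dance against the link-local metadata service: the first two PUTs fail with "network is unreachable" because eth0 has no address yet, and attempt #3 succeeds right after the DHCPv4 lease lands. A minimal sketch of the same flow, with the widening retry gaps the log suggests; the endpoints are taken verbatim from the log, while the header names are assumptions about the metadata service, not something the log confirms:

```python
import time
import urllib.request

BASE = "http://169.254.169.254/v1"

def fetch_user_data(attempts: int = 5, delay: float = 0.2) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            # PUT /v1/token first, as Ignition does above.
            req = urllib.request.Request(f"{BASE}/token", method="PUT")
            req.add_header("Metadata-Token-Expiry-Seconds", "300")  # assumed header
            with urllib.request.urlopen(req, timeout=5) as resp:
                token = resp.read().decode()
            # Then GET /v1/user-data with the token attached.
            req = urllib.request.Request(f"{BASE}/user-data")
            req.add_header("Metadata-Token", token)  # assumed header
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read()
        except OSError as exc:  # e.g. "network is unreachable" before DHCP finishes
            print(f"attempt #{attempt}: {exc}")
            time.sleep(delay)
            delay *= 2  # back off between attempts
    raise RuntimeError("metadata service never became reachable")
```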
May 27 03:54:44.747484 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:54:44.748485 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:54:44.749687 systemd[1]: Reached target basic.target - Basic System.
May 27 03:54:44.751430 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:54:44.780094 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 03:54:44.784088 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:54:44.787752 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:54:44.890988 kernel: EXT4-fs (sda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:54:44.892118 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:54:44.893209 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:54:44.894984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:54:44.898034 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:54:44.899318 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 03:54:44.899364 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:54:44.899387 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:54:44.912865 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:54:44.914234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:54:44.923002 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (882)
May 27 03:54:44.928000 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:44.928031 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:44.928043 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:54:44.935661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:54:44.971610 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:54:44.976685 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory
May 27 03:54:44.981165 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:54:44.986024 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:54:45.081582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:54:45.084283 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:54:45.086079 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:54:45.103839 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:54:45.105327 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:45.120070 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:54:45.130926 ignition[999]: INFO : Ignition 2.21.0 May 27 03:54:45.130926 ignition[999]: INFO : Stage: mount May 27 03:54:45.133287 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:54:45.133287 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 03:54:45.133287 ignition[999]: INFO : mount: mount passed May 27 03:54:45.133287 ignition[999]: INFO : Ignition finished successfully May 27 03:54:45.135047 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:54:45.138452 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:54:45.788176 systemd-networkd[847]: eth0: Gained IPv6LL May 27 03:54:45.894071 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:54:45.917264 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1009) May 27 03:54:45.917325 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:54:45.921103 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:54:45.921121 kernel: BTRFS info (device sda6): using free-space-tree May 27 03:54:45.929929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:54:45.958862 ignition[1025]: INFO : Ignition 2.21.0 May 27 03:54:45.958862 ignition[1025]: INFO : Stage: files May 27 03:54:45.960196 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:54:45.960196 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 03:54:45.960196 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping May 27 03:54:45.962430 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:54:45.962430 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:54:45.965095 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:54:45.966034 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:54:45.966810 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:54:45.966404 unknown[1025]: wrote ssh authorized keys file for user: core May 27 03:54:45.968289 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:54:45.968289 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 03:54:46.280396 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:54:46.597263 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:54:46.597263 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:54:46.600040 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:54:46.608525 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:54:46.608525 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:54:46.608525 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 03:54:47.092196 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 03:54:47.323501 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:54:47.323501 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 03:54:47.326049 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 27 03:54:47.328100 ignition[1025]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:54:47.328100 
ignition[1025]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:54:47.328100 ignition[1025]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:54:47.328100 ignition[1025]: INFO : files: files passed May 27 03:54:47.328100 ignition[1025]: INFO : Ignition finished successfully May 27 03:54:47.330432 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:54:47.334080 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:54:47.338121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:54:47.347833 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:54:47.347940 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 03:54:47.354364 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:54:47.355339 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:54:47.355339 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:54:47.356891 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:54:47.358196 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:54:47.359711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 03:54:47.397924 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:54:47.398061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:54:47.399308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:54:47.400298 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:54:47.401473 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:54:47.402198 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:54:47.432834 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:54:47.434575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:54:47.447728 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:54:47.448496 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:54:47.449746 systemd[1]: Stopped target timers.target - Timer Units. May 27 03:54:47.450918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 03:54:47.451060 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:54:47.452258 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:54:47.452993 systemd[1]: Stopped target basic.target - Basic System. May 27 03:54:47.454211 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:54:47.455257 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:54:47.456329 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 03:54:47.457575 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
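The files stage logged above (ops 1 through 10) is Ignition replaying declarative config: an ssh key for "core", files fetched from URLs, the symlink that activates the Kubernetes sysext, and unit presets. A Butane sketch that would generate operations like these; the ssh key is a placeholder and the enabled unit is assumed to exist elsewhere, while the paths and URLs are copied from the log:

    # Transpile human-readable Butane into the JSON Ignition consumes.
    cat > provision.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... core@example   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true        # matches "setting preset to enabled" above
    EOF
    butane --strict provision.bu > provision.ign
    # The run's outcome lands in the /etc/.ignition-result.json written by op(10).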
May 27 03:54:47.458793 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:54:47.460054 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:54:47.461235 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:54:47.462534 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:54:47.464478 systemd[1]: Stopped target swap.target - Swaps. May 27 03:54:47.465517 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:54:47.465647 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 03:54:47.466926 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:54:47.467747 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:54:47.468944 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:54:47.469313 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:54:47.470293 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:54:47.470421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 03:54:47.471928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:54:47.472054 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:54:47.472812 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:54:47.472939 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:54:47.475038 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:54:47.476766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:54:47.476882 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:54:47.481256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:54:47.483035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:54:47.483160 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:54:47.486603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:54:47.486700 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:54:47.492875 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:54:47.494436 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 03:54:47.505344 ignition[1080]: INFO : Ignition 2.21.0 May 27 03:54:47.506844 ignition[1080]: INFO : Stage: umount May 27 03:54:47.506844 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:54:47.506844 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 03:54:47.506844 ignition[1080]: INFO : umount: umount passed May 27 03:54:47.506844 ignition[1080]: INFO : Ignition finished successfully May 27 03:54:47.513440 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:54:47.513577 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 03:54:47.522802 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 03:54:47.523526 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:54:47.523605 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:54:47.524377 systemd[1]: ignition-kargs.service: Deactivated successfully. 
May 27 03:54:47.524430 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:54:47.525768 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 03:54:47.525814 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 03:54:47.526956 systemd[1]: Stopped target network.target - Network. May 27 03:54:47.528937 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:54:47.529009 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:54:47.531625 systemd[1]: Stopped target paths.target - Path Units. May 27 03:54:47.532229 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:54:47.538779 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:54:47.540195 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:54:47.540666 systemd[1]: Stopped target sockets.target - Socket Units. May 27 03:54:47.541230 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:54:47.541272 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:54:47.541780 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:54:47.541819 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:54:47.543962 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:54:47.544031 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:54:47.545779 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:54:47.545826 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 03:54:47.546603 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 03:54:47.548208 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:54:47.549759 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:54:47.549866 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:54:47.551607 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:54:47.551689 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:54:47.557235 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:54:47.557659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:54:47.561694 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 03:54:47.562021 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:54:47.562171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:54:47.564202 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:54:47.564888 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:54:47.565849 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:54:47.565891 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:54:47.568006 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:54:47.571038 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:54:47.571093 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:54:47.573127 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 27 03:54:47.573176 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:54:47.575186 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:54:47.575236 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 03:54:47.576554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:54:47.576601 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:54:47.577696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:54:47.580178 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:54:47.580241 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:54:47.595899 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:54:47.596147 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:54:47.598705 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:54:47.598830 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:54:47.600792 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:54:47.600875 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 03:54:47.601519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:54:47.601558 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:54:47.602715 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:54:47.602766 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 03:54:47.604504 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:54:47.604550 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:54:47.605761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:54:47.605815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:54:47.607904 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:54:47.609742 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:54:47.609797 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:54:47.611953 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 03:54:47.612023 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:54:47.615088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:54:47.615138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:54:47.618311 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 03:54:47.618370 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 03:54:47.618417 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:54:47.626784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 03:54:47.626902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:54:47.628437 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
May 27 03:54:47.630356 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:54:47.646446 systemd[1]: Switching root. May 27 03:54:47.683178 systemd-journald[206]: Journal stopped May 27 03:54:48.697485 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). May 27 03:54:48.697511 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:54:48.697523 kernel: SELinux: policy capability open_perms=1 May 27 03:54:48.697535 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:54:48.697544 kernel: SELinux: policy capability always_check_network=0 May 27 03:54:48.697552 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:54:48.697562 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:54:48.697571 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:54:48.697579 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:54:48.697588 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:54:48.697776 kernel: audit: type=1403 audit(1748318087.813:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:54:48.697786 systemd[1]: Successfully loaded SELinux policy in 58.139ms. May 27 03:54:48.697796 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.438ms. May 27 03:54:48.697807 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:54:48.697817 systemd[1]: Detected virtualization kvm. May 27 03:54:48.697829 systemd[1]: Detected architecture x86-64. May 27 03:54:48.697838 systemd[1]: Detected first boot. May 27 03:54:48.697847 systemd[1]: Initializing machine ID from random generator. May 27 03:54:48.697857 zram_generator::config[1124]: No configuration found. May 27 03:54:48.697867 kernel: Guest personality initialized and is inactive May 27 03:54:48.697876 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 03:54:48.697884 kernel: Initialized host personality May 27 03:54:48.697895 kernel: NET: Registered PF_VSOCK protocol family May 27 03:54:48.697904 systemd[1]: Populated /etc with preset unit settings. May 27 03:54:48.697915 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:54:48.697924 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:54:48.697934 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:54:48.697943 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:54:48.697953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:54:48.697964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:54:48.698008 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:54:48.698018 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:54:48.698028 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 03:54:48.698038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:54:48.698048 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
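The first lines after switch-root show the real root's SELinux policy loading (the policy capability list) and systemd 256.8 announcing its compile-time feature string; "Detected first boot" plus "Initializing machine ID from random generator" is why /etc/machine-id is freshly minted here. All of this can be re-read on the running system; getenforce comes from libselinux-utils and its presence on the image is an assumption:

    systemctl --version    # prints the same +PAM +AUDIT +SELINUX ... string
    getenforce             # current SELinux enforcement mode
    cat /etc/machine-id    # the ID minted on this first boot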
May 27 03:54:48.698058 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:54:48.698070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:54:48.698080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:54:48.698090 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 03:54:48.698099 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:54:48.698112 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:54:48.698122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:54:48.698132 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 03:54:48.698142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:54:48.698154 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:54:48.698164 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 03:54:48.698173 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:54:48.698184 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:54:48.698194 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:54:48.698204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:54:48.698214 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:54:48.698223 systemd[1]: Reached target slices.target - Slice Units. May 27 03:54:48.698235 systemd[1]: Reached target swap.target - Swaps. May 27 03:54:48.698246 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 03:54:48.698255 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:54:48.698265 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:54:48.698275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:54:48.698287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:54:48.698297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:54:48.698308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:54:48.698318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:54:48.698328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:54:48.698337 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:54:48.698348 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:48.698357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:54:48.698369 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:54:48.698379 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:54:48.698389 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:54:48.698399 systemd[1]: Reached target machines.target - Containers. 
May 27 03:54:48.698409 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 03:54:48.698419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:54:48.698429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:54:48.698439 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 03:54:48.698451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:54:48.698461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:54:48.698471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:54:48.698481 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 03:54:48.698490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:54:48.698500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 03:54:48.698510 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:54:48.698520 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:54:48.698530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:54:48.698542 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:54:48.698552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:54:48.698562 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:54:48.698572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:54:48.698582 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:54:48.698592 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:54:48.698602 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:54:48.698612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:54:48.698624 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:54:48.698634 systemd[1]: Stopped verity-setup.service. May 27 03:54:48.698645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:48.698654 kernel: loop: module loaded May 27 03:54:48.698664 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:54:48.698674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:54:48.698684 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:54:48.698693 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:54:48.698705 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:54:48.698715 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:54:48.698725 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:54:48.698735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
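The cluster of modprobe@*.service units above is one template unit instantiated per module: the text after "@" becomes the module name. In upstream systemd the template's ExecStart is roughly "modprobe -abq %i", so those starts reduce to:

    # Equivalent module loads (upstream template semantics, roughly):
    modprobe -abq configfs dm_mod drm efi_pstore fuse loop
    # or per instance, the way systemd drives it:
    systemctl start modprobe@loop.service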
May 27 03:54:48.698745 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:54:48.698754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:54:48.698764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:54:48.698774 kernel: fuse: init (API version 7.41) May 27 03:54:48.698783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:54:48.698795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:54:48.698805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:54:48.698815 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:54:48.698824 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:54:48.698834 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:54:48.698844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:54:48.698854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:54:48.698864 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:54:48.698874 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:54:48.698886 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:54:48.698896 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:54:48.699141 systemd-journald[1215]: Collecting audit messages is disabled. May 27 03:54:48.699165 systemd-journald[1215]: Journal started May 27 03:54:48.699186 systemd-journald[1215]: Runtime Journal (/run/log/journal/ec3fd38d8bf2483ca56f0b255ce71fdc) is 8M, max 78.5M, 70.5M free. May 27 03:54:48.729020 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:54:48.729054 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:54:48.729071 kernel: ACPI: bus type drm_connector registered May 27 03:54:48.729085 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:54:48.729102 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:54:48.345197 systemd[1]: Queued start job for default target multi-user.target. May 27 03:54:48.365701 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 27 03:54:48.366422 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 03:54:48.739024 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 03:54:48.739051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:54:48.754283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:54:48.754317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:54:48.759078 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:54:48.763989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:54:48.770986 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
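systemd-journald above sizes its runtime journal on a tmpfs (8M used, 78.5M max), and once /var is writable the flush service moves it to persistent storage. The same state can be queried at any time:

    journalctl --disk-usage   # total space the journal files occupy
    journalctl --flush        # what systemd-journal-flush.service triggers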
May 27 03:54:48.777041 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 03:54:48.798769 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 03:54:48.798803 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:54:48.800312 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:54:48.802225 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:54:48.804101 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 03:54:48.806403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:54:48.810004 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 03:54:48.811676 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 03:54:48.813594 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 03:54:48.827012 kernel: loop0: detected capacity change from 0 to 113872 May 27 03:54:48.840760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:54:48.858236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:54:48.865147 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 03:54:48.871738 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:54:48.874220 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 03:54:48.876214 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:54:48.880028 kernel: loop1: detected capacity change from 0 to 229808 May 27 03:54:48.882641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:54:48.907819 systemd-journald[1215]: Time spent on flushing to /var/log/journal/ec3fd38d8bf2483ca56f0b255ce71fdc is 16.593ms for 1006 entries. May 27 03:54:48.907819 systemd-journald[1215]: System Journal (/var/log/journal/ec3fd38d8bf2483ca56f0b255ce71fdc) is 8M, max 195.6M, 187.6M free. May 27 03:54:48.934442 systemd-journald[1215]: Received client request to flush runtime journal. May 27 03:54:48.934487 kernel: loop2: detected capacity change from 0 to 8 May 27 03:54:48.917410 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 03:54:48.938307 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:54:48.955478 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 27 03:54:48.955870 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 27 03:54:48.966393 kernel: loop3: detected capacity change from 0 to 146240 May 27 03:54:48.969497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:54:49.014002 kernel: loop4: detected capacity change from 0 to 113872 May 27 03:54:49.031023 kernel: loop5: detected capacity change from 0 to 229808 May 27 03:54:49.051996 kernel: loop6: detected capacity change from 0 to 8 May 27 03:54:49.058062 kernel: loop7: detected capacity change from 0 to 146240 May 27 03:54:49.076749 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 27 03:54:49.077336 (sd-merge)[1273]: Merged extensions into '/usr'. 
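The (sd-merge) lines are systemd-sysext scanning the staged extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-akamai) and overlaying them onto /usr. After boot the merge is inspectable and repeatable:

    systemd-sysext status    # which extensions are merged, and where
    systemd-sysext refresh   # re-scan /etc/extensions and re-merge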
May 27 03:54:49.083024 systemd[1]: Reload requested from client PID 1230 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:54:49.083110 systemd[1]: Reloading... May 27 03:54:49.179052 zram_generator::config[1299]: No configuration found. May 27 03:54:49.309714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:54:49.353001 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 03:54:49.382530 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 03:54:49.382961 systemd[1]: Reloading finished in 299 ms. May 27 03:54:49.409256 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 03:54:49.410353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 03:54:49.420096 systemd[1]: Starting ensure-sysext.service... May 27 03:54:49.423083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:54:49.454931 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... May 27 03:54:49.454944 systemd[1]: Reloading... May 27 03:54:49.481145 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:54:49.481179 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 03:54:49.481440 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:54:49.481848 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 03:54:49.484565 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:54:49.484815 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. May 27 03:54:49.484878 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. May 27 03:54:49.494493 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:54:49.494506 systemd-tmpfiles[1343]: Skipping /boot May 27 03:54:49.510339 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:54:49.510357 systemd-tmpfiles[1343]: Skipping /boot May 27 03:54:49.542046 zram_generator::config[1370]: No configuration found. May 27 03:54:49.636605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:54:49.716689 systemd[1]: Reloading finished in 261 ms. May 27 03:54:49.737268 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:54:49.751311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:54:49.760896 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:54:49.764841 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:54:49.769133 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:54:49.775197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
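The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings mean two tmpfiles.d fragments declare the same path; the first line parsed wins and later duplicates are dropped. The merged view, annotated with the source file of each line, can be dumped to confirm which fragment won:

    systemd-tmpfiles --cat-config | grep -n '/var/lib/nfs/sm'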
May 27 03:54:49.782341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:54:49.786205 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:54:49.790231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:49.791159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:54:49.793045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:54:49.804526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:54:49.810230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:54:49.810853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:54:49.810942 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:54:49.811069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:49.813120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:54:49.813441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:54:49.822888 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:54:49.825704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:49.827999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:54:49.829779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:54:49.830436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:54:49.830560 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:54:49.834237 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:54:49.840264 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:54:49.840834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:49.842322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:54:49.842541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:54:49.844813 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:54:49.846077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:54:49.853174 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:54:49.864903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:54:49.865838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 27 03:54:49.871535 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:49.871758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:54:49.874107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:54:49.874475 systemd-udevd[1425]: Using default interface naming scheme 'v255'. May 27 03:54:49.877195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:54:49.883287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:54:49.885349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:54:49.885408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:54:49.885507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:54:49.886308 systemd[1]: Finished ensure-sysext.service. May 27 03:54:49.894227 augenrules[1456]: No rules May 27 03:54:49.894871 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:54:49.897318 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:54:49.902361 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:54:49.910460 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:54:49.911365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:54:49.911597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:54:49.916078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:54:49.923934 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:54:49.924637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:54:49.927379 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:54:49.928242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:54:49.932289 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:54:49.932942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:54:49.934168 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:54:49.939143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:54:49.944126 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:54:49.964690 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 03:54:50.109044 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
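"augenrules: No rules" above just means /etc/audit/rules.d/ contributed nothing to load; audit-rules.service still finished successfully. Whether the userspace audit tools ship on this image is an assumption, but where present the rule set is managed like this:

    ls /etc/audit/rules.d/   # fragments augenrules concatenates
    augenrules --load        # rebuild and load the combined rule set
    auditctl -l              # list rules currently active in the kernel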
May 27 03:54:50.189015 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:54:50.205084 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 03:54:50.213113 systemd-networkd[1472]: lo: Link UP May 27 03:54:50.213125 systemd-networkd[1472]: lo: Gained carrier May 27 03:54:50.218729 systemd-networkd[1472]: Enumeration completed May 27 03:54:50.218835 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:54:50.219155 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:54:50.219160 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:54:50.219812 systemd-networkd[1472]: eth0: Link UP May 27 03:54:50.222047 systemd-networkd[1472]: eth0: Gained carrier May 27 03:54:50.222065 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:54:50.224249 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:54:50.230168 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:54:50.245004 kernel: ACPI: button: Power Button [PWRF] May 27 03:54:50.262095 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:54:50.278654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 27 03:54:50.287097 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:54:50.298227 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:54:50.298474 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:54:50.297960 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:54:50.300026 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:54:50.316021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:54:50.324720 systemd-resolved[1418]: Positive Trust Anchors: May 27 03:54:50.324740 systemd-resolved[1418]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:54:50.324767 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:54:50.330175 systemd-resolved[1418]: Defaulting to hostname 'linux'. May 27 03:54:50.332015 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:54:50.333047 systemd[1]: Reached target network.target - Network. May 27 03:54:50.333562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:54:50.334138 systemd[1]: Reached target sysinit.target - System Initialization. 
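eth0 above matched the catch-all /usr/lib/systemd/network/zz-default.network, and networkd warns because matching on an interface name can be unstable across boots. A sketch of a more specific override, pinned to the NIC's hardware address instead (contents assumed, not taken from the log):

    cat > /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    MACAddress=00:16:3e:00:00:00   # placeholder; pin to the NIC, not its name

    [Network]
    DHCP=yes
    EOF
    networkctl reload              # apply without restarting networkd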
May 27 03:54:50.334951 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:54:50.335585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:54:50.336270 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:54:50.337212 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:54:50.340212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:54:50.340934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:54:50.341514 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:54:50.341545 systemd[1]: Reached target paths.target - Path Units. May 27 03:54:50.342272 systemd[1]: Reached target timers.target - Timer Units. May 27 03:54:50.345064 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:54:50.349296 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:54:50.353873 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:54:50.355958 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:54:50.356568 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:54:50.364912 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 03:54:50.367228 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:54:50.369412 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:54:50.371859 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:54:50.373594 systemd[1]: Reached target basic.target - Basic System. May 27 03:54:50.375002 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:54:50.375437 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:54:50.377919 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:54:50.381188 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 03:54:50.384863 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:54:50.393468 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:54:50.397369 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:54:50.401167 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:54:50.402043 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:54:50.407759 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:54:50.414481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:54:50.415912 jq[1538]: false May 27 03:54:50.418165 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:54:50.427840 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
May 27 03:54:50.434849 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:54:50.446563 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:54:50.447844 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:54:50.448303 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:54:50.450886 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:54:50.457101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:54:50.467221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:54:50.468110 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:54:50.468886 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache May 27 03:54:50.468501 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:54:50.469188 oslogin_cache_refresh[1540]: Refreshing passwd entry cache May 27 03:54:50.488109 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:54:50.488367 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:54:50.490401 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting May 27 03:54:50.490401 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:54:50.490401 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache May 27 03:54:50.490401 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting May 27 03:54:50.490401 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:54:50.489402 oslogin_cache_refresh[1540]: Failure getting users, quitting May 27 03:54:50.489421 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:54:50.489460 oslogin_cache_refresh[1540]: Refreshing group entry cache May 27 03:54:50.489926 oslogin_cache_refresh[1540]: Failure getting groups, quitting May 27 03:54:50.489934 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:54:50.502664 update_engine[1549]: I20250527 03:54:50.500746 1549 main.cc:92] Flatcar Update Engine starting May 27 03:54:50.502544 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:54:50.508459 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:54:50.510379 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:54:50.510592 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 27 03:54:50.524427 jq[1551]: true May 27 03:54:50.524626 extend-filesystems[1539]: Found loop4 May 27 03:54:50.524626 extend-filesystems[1539]: Found loop5 May 27 03:54:50.524626 extend-filesystems[1539]: Found loop6 May 27 03:54:50.524626 extend-filesystems[1539]: Found loop7 May 27 03:54:50.524626 extend-filesystems[1539]: Found sda May 27 03:54:50.524626 extend-filesystems[1539]: Found sda1 May 27 03:54:50.524626 extend-filesystems[1539]: Found sda2 May 27 03:54:50.524626 extend-filesystems[1539]: Found sda3 May 27 03:54:50.524626 extend-filesystems[1539]: Found usr May 27 03:54:50.524626 extend-filesystems[1539]: Found sda4 May 27 03:54:50.524626 extend-filesystems[1539]: Found sda6 May 27 03:54:50.524626 extend-filesystems[1539]: Found sda7 May 27 03:54:50.524626 extend-filesystems[1539]: Found sda9 May 27 03:54:50.524626 extend-filesystems[1539]: Checking size of /dev/sda9 May 27 03:54:50.535296 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:54:50.561679 coreos-metadata[1535]: May 27 03:54:50.532 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 27 03:54:50.566158 jq[1569]: true May 27 03:54:50.569327 tar[1558]: linux-amd64/LICENSE May 27 03:54:50.569327 tar[1558]: linux-amd64/helm May 27 03:54:50.570998 extend-filesystems[1539]: Resized partition /dev/sda9 May 27 03:54:50.586911 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:54:50.596767 dbus-daemon[1536]: [system] SELinux support is enabled May 27 03:54:50.597087 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:54:50.601785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:54:50.601812 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:54:50.606036 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:54:50.606057 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:54:50.623961 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 27 03:54:50.621432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:54:50.629990 systemd[1]: Started update-engine.service - Update Engine. May 27 03:54:50.630814 update_engine[1549]: I20250527 03:54:50.630126 1549 update_check_scheduler.cc:74] Next update check in 5m50s May 27 03:54:50.638886 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:54:50.685059 systemd-networkd[1472]: eth0: DHCPv4 address 172.237.145.45/24, gateway 172.237.145.1 acquired from 23.192.120.37 May 27 03:54:50.685529 dbus-daemon[1536]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1472 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 03:54:50.688601 systemd-timesyncd[1461]: Network configuration changed, trying to establish connection. May 27 03:54:50.690639 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
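extend-filesystems is checking whether /dev/sda9 is larger than the filesystem sitting on it before deciding to resize. A minimal sketch of that comparison, under the assumption (as in this log) that /dev/sda9 carries the root filesystem; it reads the partition size from sysfs and the filesystem size via statfs, using golang.org/x/sys/unix:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Partition size in 512-byte sectors, as sysfs reports it.
        raw, err := os.ReadFile("/sys/class/block/sda9/size")
        if err != nil {
            panic(err)
        }
        sectors, err := strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
        if err != nil {
            panic(err)
        }
        partBytes := sectors * 512

        // Filesystem size as seen by the mounted ext4 volume.
        var st unix.Statfs_t
        if err := unix.Statfs("/", &st); err != nil {
            panic(err)
        }
        fsBytes := uint64(st.Blocks) * uint64(st.Bsize)

        fmt.Printf("partition=%d fs=%d grow=%v\n", partBytes, fsBytes, partBytes > fsBytes)
    }

In this boot the gap is large: the kernel line below shows the filesystem at 553472 4k blocks (about 2.1 GiB) on a partition that holds 20360187 blocks (about 78 GiB), so on-line resizing is required.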
May 27 03:54:50.734992 kernel: EDAC MC: Ver: 3.0.0 May 27 03:54:50.737391 bash[1604]: Updated "/home/core/.ssh/authorized_keys" May 27 03:54:50.739795 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:54:50.745079 systemd[1]: Starting sshkeys.service... May 27 03:54:50.766994 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 03:54:50.769026 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 03:54:50.856483 coreos-metadata[1612]: May 27 03:54:50.856 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 27 03:54:50.876465 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:54:50.909327 containerd[1570]: time="2025-05-27T03:54:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:54:50.912570 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:54:50.921350 containerd[1570]: time="2025-05-27T03:54:50.921314608Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:54:51.943732 systemd-timesyncd[1461]: Contacted time server 23.142.248.9:123 (0.flatcar.pool.ntp.org). May 27 03:54:51.943950 systemd-timesyncd[1461]: Initial clock synchronization to Tue 2025-05-27 03:54:51.942958 UTC. May 27 03:54:51.944848 systemd-resolved[1418]: Clock change detected. Flushing caches. May 27 03:54:51.974635 containerd[1570]: time="2025-05-27T03:54:51.974596387Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.95µs" May 27 03:54:51.974635 containerd[1570]: time="2025-05-27T03:54:51.974631947Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:54:51.974702 containerd[1570]: time="2025-05-27T03:54:51.974650177Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:54:51.975165 containerd[1570]: time="2025-05-27T03:54:51.974823497Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:54:51.975165 containerd[1570]: time="2025-05-27T03:54:51.974844207Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:54:51.975165 containerd[1570]: time="2025-05-27T03:54:51.974867997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:54:51.975165 containerd[1570]: time="2025-05-27T03:54:51.974931307Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:54:51.975165 containerd[1570]: time="2025-05-27T03:54:51.974942367Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:54:51.975165 containerd[1570]: time="2025-05-27T03:54:51.975159617Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 
03:54:51.984472 containerd[1570]: time="2025-05-27T03:54:51.975180637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:54:51.984472 containerd[1570]: time="2025-05-27T03:54:51.982776793Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:54:51.984472 containerd[1570]: time="2025-05-27T03:54:51.982820673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:54:51.984472 containerd[1570]: time="2025-05-27T03:54:51.982993853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:54:51.985366 containerd[1570]: time="2025-05-27T03:54:51.985335842Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:54:51.985926 containerd[1570]: time="2025-05-27T03:54:51.985894831Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:54:51.992663 containerd[1570]: time="2025-05-27T03:54:51.991527449Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:54:51.992663 containerd[1570]: time="2025-05-27T03:54:51.991664549Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:54:51.992663 containerd[1570]: time="2025-05-27T03:54:51.992017338Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:54:51.992663 containerd[1570]: time="2025-05-27T03:54:51.992177168Z" level=info msg="metadata content store policy set" policy=shared May 27 03:54:51.993228 coreos-metadata[1612]: May 27 03:54:51.993 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 27 03:54:52.008143 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
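sshd-keygen.service has just generated the missing RSA, ECDSA and ED25519 host keys. The stock way to do that is ssh-keygen -A, which creates any host key type that does not yet exist under /etc/ssh; a minimal sketch, assuming ssh-keygen is on PATH (the unit may invoke it differently):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // ssh-keygen -A creates every missing host key (RSA, ECDSA,
        // ED25519), matching the key types listed in the log line above.
        out, err := exec.Command("ssh-keygen", "-A").CombinedOutput()
        if err != nil {
            log.Fatalf("host key generation failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }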
May 27 03:54:52.008428 containerd[1570]: time="2025-05-27T03:54:52.008397370Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:54:52.008461 containerd[1570]: time="2025-05-27T03:54:52.008451700Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:54:52.008494 containerd[1570]: time="2025-05-27T03:54:52.008467160Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:54:52.008494 containerd[1570]: time="2025-05-27T03:54:52.008479230Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008523920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008543970Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008555050Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008567330Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008583410Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008593190Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008602030Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:54:52.008704 containerd[1570]: time="2025-05-27T03:54:52.008613320Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008719140Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008743240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008768160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008777890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008787290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008796890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008806440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008816360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 
27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008826060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:54:52.008834 containerd[1570]: time="2025-05-27T03:54:52.008835350Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:54:52.008997 containerd[1570]: time="2025-05-27T03:54:52.008844660Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:54:52.008997 containerd[1570]: time="2025-05-27T03:54:52.008897170Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:54:52.008997 containerd[1570]: time="2025-05-27T03:54:52.008908390Z" level=info msg="Start snapshots syncer" May 27 03:54:52.008997 containerd[1570]: time="2025-05-27T03:54:52.008937240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:54:52.009265 containerd[1570]: time="2025-05-27T03:54:52.009209030Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:54:52.009373 containerd[1570]: time="2025-05-27T03:54:52.009280760Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:54:52.010242 containerd[1570]: time="2025-05-27T03:54:52.010076819Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:54:52.010319 containerd[1570]: time="2025-05-27T03:54:52.010291359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:54:52.010418 containerd[1570]: time="2025-05-27T03:54:52.010337849Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:54:52.010418 containerd[1570]: time="2025-05-27T03:54:52.010354239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:54:52.010418 containerd[1570]: time="2025-05-27T03:54:52.010363779Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:54:52.010418 containerd[1570]: time="2025-05-27T03:54:52.010374139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:54:52.010418 containerd[1570]: time="2025-05-27T03:54:52.010383569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:54:52.010418 containerd[1570]: time="2025-05-27T03:54:52.010393509Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:54:52.010522 containerd[1570]: time="2025-05-27T03:54:52.010455429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:54:52.010522 containerd[1570]: time="2025-05-27T03:54:52.010476059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:54:52.010522 containerd[1570]: time="2025-05-27T03:54:52.010486949Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:54:52.010573 containerd[1570]: time="2025-05-27T03:54:52.010527219Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:54:52.010573 containerd[1570]: time="2025-05-27T03:54:52.010541819Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:54:52.010573 containerd[1570]: time="2025-05-27T03:54:52.010549669Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:54:52.010573 containerd[1570]: time="2025-05-27T03:54:52.010557569Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010564389Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010663409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010676929Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010692269Z" level=info msg="runtime interface created" May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010697319Z" level=info msg="created NRI interface" May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010704949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010714779Z" level=info msg="Connect containerd service" May 27 03:54:52.011003 containerd[1570]: time="2025-05-27T03:54:52.010735539Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 
27 03:54:52.017222 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 27 03:54:52.038826 extend-filesystems[1582]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 27 03:54:52.038826 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 10 May 27 03:54:52.038826 extend-filesystems[1582]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 27 03:54:52.038095 systemd-logind[1546]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:54:52.171156 containerd[1570]: time="2025-05-27T03:54:52.036951286Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:54:52.173048 coreos-metadata[1612]: May 27 03:54:52.130 INFO Fetch successful May 27 03:54:52.173091 extend-filesystems[1539]: Resized filesystem in /dev/sda9 May 27 03:54:52.038115 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:54:52.110092 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:54:52.110355 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:54:52.110510 systemd-logind[1546]: New seat seat0. May 27 03:54:52.170596 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:54:52.178233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:54:52.200075 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:54:52.230027 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 27 03:54:52.233940 update-ssh-keys[1648]: Updated "/home/core/.ssh/authorized_keys" May 27 03:54:52.234493 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 03:54:52.238056 systemd[1]: Finished sshkeys.service. May 27 03:54:52.238549 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.hostname1' May 27 03:54:52.241490 dbus-daemon[1536]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1606 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 27 03:54:52.242264 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:54:52.242649 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:54:52.243932 containerd[1570]: time="2025-05-27T03:54:52.243905462Z" level=info msg="Start subscribing containerd event" May 27 03:54:52.244053 containerd[1570]: time="2025-05-27T03:54:52.244039422Z" level=info msg="Start recovering state" May 27 03:54:52.244307 containerd[1570]: time="2025-05-27T03:54:52.244293082Z" level=info msg="Start event monitor" May 27 03:54:52.244521 containerd[1570]: time="2025-05-27T03:54:52.244509752Z" level=info msg="Start cni network conf syncer for default" May 27 03:54:52.244563 containerd[1570]: time="2025-05-27T03:54:52.244553132Z" level=info msg="Start streaming server" May 27 03:54:52.244602 containerd[1570]: time="2025-05-27T03:54:52.244592872Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:54:52.244652 containerd[1570]: time="2025-05-27T03:54:52.244641532Z" level=info msg="runtime interface starting up..." May 27 03:54:52.244695 containerd[1570]: time="2025-05-27T03:54:52.244685022Z" level=info msg="starting plugins..." 
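The kernel line above ("EXT4-fs (sda9): resized filesystem to 20360187") is the on-line grow that extend-filesystems requested: ext4 can be grown while mounted, though not shrunk, which is why the whole operation happens on the live root filesystem. A minimal sketch of the invocation, assuming resize2fs is on PATH and /dev/sda9 is the device from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // With no size argument resize2fs grows the filesystem to fill
        // the partition; on a mounted ext4 volume this is an on-line
        // resize, reported by the kernel as "resized filesystem to ...".
        out, err := exec.Command("resize2fs", "/dev/sda9").CombinedOutput()
        if err != nil {
            log.Fatalf("resize2fs: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }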
May 27 03:54:52.244782 containerd[1570]: time="2025-05-27T03:54:52.244769952Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:54:52.245747 containerd[1570]: time="2025-05-27T03:54:52.245717942Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:54:52.246397 containerd[1570]: time="2025-05-27T03:54:52.245859161Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:54:52.247954 containerd[1570]: time="2025-05-27T03:54:52.247939640Z" level=info msg="containerd successfully booted in 0.347220s" May 27 03:54:52.249692 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:54:52.256435 systemd[1]: Starting polkit.service - Authorization Manager... May 27 03:54:52.260059 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:54:52.286736 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:54:52.292434 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:54:52.294706 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:54:52.295953 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:54:52.351537 polkitd[1662]: Started polkitd version 126 May 27 03:54:52.355618 polkitd[1662]: Loading rules from directory /etc/polkit-1/rules.d May 27 03:54:52.356002 polkitd[1662]: Loading rules from directory /run/polkit-1/rules.d May 27 03:54:52.356100 polkitd[1662]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 03:54:52.356403 polkitd[1662]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 03:54:52.356464 polkitd[1662]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 03:54:52.356558 polkitd[1662]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 03:54:52.357171 polkitd[1662]: Finished loading, compiling and executing 2 rules May 27 03:54:52.358574 systemd[1]: Started polkit.service - Authorization Manager. May 27 03:54:52.359862 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 27 03:54:52.360335 polkitd[1662]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 27 03:54:52.368319 systemd-resolved[1418]: System hostname changed to '172-237-145-45'. May 27 03:54:52.368325 systemd-hostnamed[1606]: Hostname set to <172-237-145-45> (transient) May 27 03:54:52.509163 tar[1558]: linux-amd64/README.md May 27 03:54:52.524827 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:54:52.535122 coreos-metadata[1535]: May 27 03:54:52.535 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 27 03:54:52.626112 coreos-metadata[1535]: May 27 03:54:52.626 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 27 03:54:52.807067 coreos-metadata[1535]: May 27 03:54:52.806 INFO Fetch successful May 27 03:54:52.807067 coreos-metadata[1535]: May 27 03:54:52.806 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 27 03:54:53.059971 coreos-metadata[1535]: May 27 03:54:53.059 INFO Fetch successful May 27 03:54:53.164560 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 03:54:53.165580 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
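coreos-metadata follows a token-then-fetch flow against the Linode metadata service: it PUTs /v1/token first, then GETs /v1/instance and /v1/network with the token, matching the Putting/Fetching lines above. A minimal sketch of that flow; the header names Metadata-Token-Expiry-Seconds and Metadata-Token follow Linode's metadata API and are an assumption here, since the log itself does not show them:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Step 1: PUT /v1/token, as in "Putting http://169.254.169.254/v1/token".
        req, _ := http.NewRequest(http.MethodPut, "http://169.254.169.254/v1/token", nil)
        req.Header.Set("Metadata-Token-Expiry-Seconds", "300") // assumed header name
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err) // only reachable from inside a Linode instance
        }
        token, _ := io.ReadAll(resp.Body)
        resp.Body.Close()

        // Step 2: GET /v1/instance with the token, as in
        // "Fetching http://169.254.169.254/v1/instance: Attempt #1".
        req, _ = http.NewRequest(http.MethodGet, "http://169.254.169.254/v1/instance", nil)
        req.Header.Set("Metadata-Token", string(token)) // assumed header name
        resp, err = http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s\n", body)
    }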
May 27 03:54:53.181325 systemd-networkd[1472]: eth0: Gained IPv6LL May 27 03:54:53.183692 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:54:53.184683 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:54:53.187086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:54:53.189380 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:54:53.218028 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:54:54.135333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:54:54.136647 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:54:54.137849 systemd[1]: Startup finished in 2.764s (kernel) + 7.156s (initrd) + 5.388s (userspace) = 15.308s. May 27 03:54:54.173553 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:54:54.723005 kubelet[1716]: E0527 03:54:54.722715 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:54:54.727153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:54:54.727384 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:54:54.727911 systemd[1]: kubelet.service: Consumed 909ms CPU time, 268.6M memory peak. May 27 03:54:55.551988 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:54:55.553359 systemd[1]: Started sshd@0-172.237.145.45:22-139.178.68.195:33900.service - OpenSSH per-connection server daemon (139.178.68.195:33900). May 27 03:54:55.902586 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 33900 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:54:55.904489 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:54:55.910751 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:54:55.912329 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:54:55.919945 systemd-logind[1546]: New session 1 of user core. May 27 03:54:55.932587 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:54:55.935516 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:54:55.950874 (systemd)[1732]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:54:55.953583 systemd-logind[1546]: New session c1 of user core. May 27 03:54:56.087920 systemd[1732]: Queued start job for default target default.target. May 27 03:54:56.099546 systemd[1732]: Created slice app.slice - User Application Slice. May 27 03:54:56.099574 systemd[1732]: Reached target paths.target - Paths. May 27 03:54:56.099620 systemd[1732]: Reached target timers.target - Timers. May 27 03:54:56.101161 systemd[1732]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:54:56.120342 systemd[1732]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:54:56.120451 systemd[1732]: Reached target sockets.target - Sockets. 
May 27 03:54:56.120489 systemd[1732]: Reached target basic.target - Basic System. May 27 03:54:56.120528 systemd[1732]: Reached target default.target - Main User Target. May 27 03:54:56.120559 systemd[1732]: Startup finished in 160ms. May 27 03:54:56.120959 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:54:56.130433 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:54:56.390988 systemd[1]: Started sshd@1-172.237.145.45:22-139.178.68.195:33916.service - OpenSSH per-connection server daemon (139.178.68.195:33916). May 27 03:54:56.725861 sshd[1743]: Accepted publickey for core from 139.178.68.195 port 33916 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:54:56.728303 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:54:56.735252 systemd-logind[1546]: New session 2 of user core. May 27 03:54:56.749394 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:54:56.969314 sshd[1745]: Connection closed by 139.178.68.195 port 33916 May 27 03:54:56.970050 sshd-session[1743]: pam_unix(sshd:session): session closed for user core May 27 03:54:56.975579 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit. May 27 03:54:56.976821 systemd[1]: sshd@1-172.237.145.45:22-139.178.68.195:33916.service: Deactivated successfully. May 27 03:54:56.979335 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:54:56.981148 systemd-logind[1546]: Removed session 2. May 27 03:54:57.033098 systemd[1]: Started sshd@2-172.237.145.45:22-139.178.68.195:33924.service - OpenSSH per-connection server daemon (139.178.68.195:33924). May 27 03:54:57.383659 sshd[1752]: Accepted publickey for core from 139.178.68.195 port 33924 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:54:57.385280 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:54:57.391347 systemd-logind[1546]: New session 3 of user core. May 27 03:54:57.401392 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:54:57.631799 sshd[1754]: Connection closed by 139.178.68.195 port 33924 May 27 03:54:57.632645 sshd-session[1752]: pam_unix(sshd:session): session closed for user core May 27 03:54:57.637406 systemd[1]: sshd@2-172.237.145.45:22-139.178.68.195:33924.service: Deactivated successfully. May 27 03:54:57.644092 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:54:57.645029 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit. May 27 03:54:57.646690 systemd-logind[1546]: Removed session 3. May 27 03:54:57.705072 systemd[1]: Started sshd@3-172.237.145.45:22-139.178.68.195:33938.service - OpenSSH per-connection server daemon (139.178.68.195:33938). May 27 03:54:58.062158 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 33938 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:54:58.063656 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:54:58.070243 systemd-logind[1546]: New session 4 of user core. May 27 03:54:58.080323 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:54:58.316764 sshd[1762]: Connection closed by 139.178.68.195 port 33938 May 27 03:54:58.319356 sshd-session[1760]: pam_unix(sshd:session): session closed for user core May 27 03:54:58.324238 systemd[1]: sshd@3-172.237.145.45:22-139.178.68.195:33938.service: Deactivated successfully. 
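Each accepted login above becomes a logind session (session-1.scope, session-2.scope, ...) tracked by systemd-logind, with a per-user manager (user@500.service) started on first login. The same bookkeeping is queryable over D-Bus; a minimal sketch using github.com/godbus/dbus/v5 to list the sessions logind knows about:

    package main

    import (
        "fmt"

        "github.com/godbus/dbus/v5"
    )

    func main() {
        conn, err := dbus.SystemBus()
        if err != nil {
            panic(err)
        }

        // ListSessions returns a(susso): id, uid, user, seat, object path —
        // the same sessions the log shows being opened and closed.
        var sessions []struct {
            ID   string
            UID  uint32
            User string
            Seat string
            Path dbus.ObjectPath
        }
        obj := conn.Object("org.freedesktop.login1", "/org/freedesktop/login1")
        if err := obj.Call("org.freedesktop.login1.Manager.ListSessions", 0).Store(&sessions); err != nil {
            panic(err)
        }
        for _, s := range sessions {
            fmt.Printf("session %s user=%s uid=%d\n", s.ID, s.User, s.UID)
        }
    }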
May 27 03:54:58.326526 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:54:58.327515 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit. May 27 03:54:58.328735 systemd-logind[1546]: Removed session 4. May 27 03:54:58.377484 systemd[1]: Started sshd@4-172.237.145.45:22-139.178.68.195:33954.service - OpenSSH per-connection server daemon (139.178.68.195:33954). May 27 03:54:58.718560 sshd[1768]: Accepted publickey for core from 139.178.68.195 port 33954 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:54:58.720459 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:54:58.725495 systemd-logind[1546]: New session 5 of user core. May 27 03:54:58.736312 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 03:54:58.925706 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:54:58.926001 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:54:58.937838 sudo[1771]: pam_unix(sudo:session): session closed for user root May 27 03:54:58.989896 sshd[1770]: Connection closed by 139.178.68.195 port 33954 May 27 03:54:58.990459 sshd-session[1768]: pam_unix(sshd:session): session closed for user core May 27 03:54:58.994543 systemd[1]: sshd@4-172.237.145.45:22-139.178.68.195:33954.service: Deactivated successfully. May 27 03:54:58.996127 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:54:58.996863 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit. May 27 03:54:58.998651 systemd-logind[1546]: Removed session 5. May 27 03:54:59.047959 systemd[1]: Started sshd@5-172.237.145.45:22-139.178.68.195:33964.service - OpenSSH per-connection server daemon (139.178.68.195:33964). May 27 03:54:59.400721 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 33964 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:54:59.402144 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:54:59.411204 systemd-logind[1546]: New session 6 of user core. May 27 03:54:59.417321 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 03:54:59.597145 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:54:59.597516 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:54:59.606447 sudo[1781]: pam_unix(sudo:session): session closed for user root May 27 03:54:59.612356 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:54:59.612723 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:54:59.624307 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:54:59.669816 augenrules[1803]: No rules May 27 03:54:59.670163 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:54:59.670554 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:54:59.672913 sudo[1780]: pam_unix(sudo:session): session closed for user root May 27 03:54:59.723614 sshd[1779]: Connection closed by 139.178.68.195 port 33964 May 27 03:54:59.724828 sshd-session[1777]: pam_unix(sshd:session): session closed for user core May 27 03:54:59.729996 systemd[1]: sshd@5-172.237.145.45:22-139.178.68.195:33964.service: Deactivated successfully. 
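The per-connection unit names (sshd@4-172.237.145.45:22-139.178.68.195:33954.service and so on) indicate an Accept=yes socket unit: systemd accepts each TCP connection itself and spawns a fresh service instance, passing the connected socket as fd 3. A minimal sketch of a service written against that protocol, using github.com/coreos/go-systemd/v22/activation; this mirrors the mechanism, not sshd's own implementation:

    package main

    import (
        "fmt"
        "net"
        "os"

        "github.com/coreos/go-systemd/v22/activation"
    )

    func main() {
        // With Accept=yes, the inherited fd is an already-connected
        // socket, one per service instance.
        files := activation.Files(true)
        if len(files) != 1 {
            fmt.Fprintln(os.Stderr, "expected exactly one socket from systemd")
            os.Exit(1)
        }
        conn, err := net.FileConn(files[0])
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Fprintf(conn, "hello from %s\n", conn.LocalAddr())
    }

With Accept=no (as docker.socket uses) the service instead inherits the listening socket and would call activation.Listeners() and accept connections itself.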
May 27 03:54:59.732212 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:54:59.733887 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit. May 27 03:54:59.735446 systemd-logind[1546]: Removed session 6. May 27 03:54:59.796130 systemd[1]: Started sshd@6-172.237.145.45:22-139.178.68.195:33970.service - OpenSSH per-connection server daemon (139.178.68.195:33970). May 27 03:55:00.133822 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 33970 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:55:00.136033 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:55:00.142100 systemd-logind[1546]: New session 7 of user core. May 27 03:55:00.151478 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:55:00.334399 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:55:00.334728 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:55:00.655064 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:55:00.673531 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:55:00.883171 dockerd[1833]: time="2025-05-27T03:55:00.882793622Z" level=info msg="Starting up" May 27 03:55:00.885130 dockerd[1833]: time="2025-05-27T03:55:00.884884851Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:55:00.940928 dockerd[1833]: time="2025-05-27T03:55:00.940509673Z" level=info msg="Loading containers: start." May 27 03:55:00.951238 kernel: Initializing XFRM netlink socket May 27 03:55:01.196778 systemd-networkd[1472]: docker0: Link UP May 27 03:55:01.200037 dockerd[1833]: time="2025-05-27T03:55:01.199989463Z" level=info msg="Loading containers: done." May 27 03:55:01.214916 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3923134439-merged.mount: Deactivated successfully. May 27 03:55:01.215407 dockerd[1833]: time="2025-05-27T03:55:01.214991026Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:55:01.215407 dockerd[1833]: time="2025-05-27T03:55:01.215045406Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:55:01.215407 dockerd[1833]: time="2025-05-27T03:55:01.215144766Z" level=info msg="Initializing buildkit" May 27 03:55:01.241970 dockerd[1833]: time="2025-05-27T03:55:01.241905852Z" level=info msg="Completed buildkit initialization" May 27 03:55:01.247840 dockerd[1833]: time="2025-05-27T03:55:01.247479800Z" level=info msg="Daemon has completed initialization" May 27 03:55:01.247840 dockerd[1833]: time="2025-05-27T03:55:01.247670239Z" level=info msg="API listen on /run/docker.sock" May 27 03:55:01.247738 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:55:01.810513 containerd[1570]: time="2025-05-27T03:55:01.810465338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 03:55:02.794438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132273058.mount: Deactivated successfully. 
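The PullImage lines that follow come from containerd's CRI plugin, pulling into the k8s.io namespace registered with NRI earlier in the log. A minimal sketch of an equivalent pull through containerd's Go client; the import paths assume the classic github.com/containerd/containerd module (containerd 2.x moved the client package), so treat them as illustrative:

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }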
May 27 03:55:03.953439 containerd[1570]: time="2025-05-27T03:55:03.952860507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:03.953439 containerd[1570]: time="2025-05-27T03:55:03.953896776Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 03:55:03.958319 containerd[1570]: time="2025-05-27T03:55:03.957493134Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:03.959083 containerd[1570]: time="2025-05-27T03:55:03.959002623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:03.959658 containerd[1570]: time="2025-05-27T03:55:03.959617223Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 2.149104455s" May 27 03:55:03.959704 containerd[1570]: time="2025-05-27T03:55:03.959659503Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 03:55:03.961715 containerd[1570]: time="2025-05-27T03:55:03.961081662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 03:55:04.978863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:55:04.982491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:05.184375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:05.195843 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:55:05.254051 kubelet[2103]: E0527 03:55:05.253923 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:55:05.263675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:55:05.263956 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:55:05.264726 systemd[1]: kubelet.service: Consumed 211ms CPU time, 111M memory peak. 
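Both kubelet starts so far fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a fresh node that file is typically written later by kubeadm or another provisioner, so the early failures and systemd's scheduled restarts are expected rather than a fault. A minimal preflight sketch of the same check:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const cfg = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(cfg); os.IsNotExist(err) {
            // The same condition the kubelet reports: the config file is
            // not there yet, so the unit exits and systemd retries.
            fmt.Fprintf(os.Stderr, "kubelet config %s missing; waiting for provisioning\n", cfg)
            os.Exit(1)
        }
        fmt.Println("kubelet config present")
    }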
May 27 03:55:05.584096 containerd[1570]: time="2025-05-27T03:55:05.583967191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:05.585058 containerd[1570]: time="2025-05-27T03:55:05.585035920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 03:55:05.585828 containerd[1570]: time="2025-05-27T03:55:05.585794910Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:05.587954 containerd[1570]: time="2025-05-27T03:55:05.587932279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:05.588815 containerd[1570]: time="2025-05-27T03:55:05.588792648Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.627679826s" May 27 03:55:05.588906 containerd[1570]: time="2025-05-27T03:55:05.588891718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 03:55:05.592467 containerd[1570]: time="2025-05-27T03:55:05.592249937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 03:55:06.838312 containerd[1570]: time="2025-05-27T03:55:06.838252003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:06.839275 containerd[1570]: time="2025-05-27T03:55:06.839130643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 03:55:06.839861 containerd[1570]: time="2025-05-27T03:55:06.839833813Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:06.841721 containerd[1570]: time="2025-05-27T03:55:06.841694342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:06.842648 containerd[1570]: time="2025-05-27T03:55:06.842619721Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.250149514s" May 27 03:55:06.842691 containerd[1570]: time="2025-05-27T03:55:06.842651681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 03:55:06.843572 
containerd[1570]: time="2025-05-27T03:55:06.843541431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 03:55:08.044749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390370555.mount: Deactivated successfully. May 27 03:55:08.434056 containerd[1570]: time="2025-05-27T03:55:08.433599376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:08.435248 containerd[1570]: time="2025-05-27T03:55:08.435233535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 03:55:08.436215 containerd[1570]: time="2025-05-27T03:55:08.435936154Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:08.437926 containerd[1570]: time="2025-05-27T03:55:08.437898833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:08.440206 containerd[1570]: time="2025-05-27T03:55:08.439632393Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.596064432s" May 27 03:55:08.440206 containerd[1570]: time="2025-05-27T03:55:08.439671553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 03:55:08.441823 containerd[1570]: time="2025-05-27T03:55:08.441509952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 03:55:09.075001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041483328.mount: Deactivated successfully. 
May 27 03:55:09.762734 containerd[1570]: time="2025-05-27T03:55:09.762671341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:09.763729 containerd[1570]: time="2025-05-27T03:55:09.763644830Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 03:55:09.764277 containerd[1570]: time="2025-05-27T03:55:09.764245420Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:09.767952 containerd[1570]: time="2025-05-27T03:55:09.766608229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:09.767952 containerd[1570]: time="2025-05-27T03:55:09.767697268Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.326144716s" May 27 03:55:09.767952 containerd[1570]: time="2025-05-27T03:55:09.767743248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 03:55:09.768620 containerd[1570]: time="2025-05-27T03:55:09.768583338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:55:10.334787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638155000.mount: Deactivated successfully. 
May 27 03:55:10.339650 containerd[1570]: time="2025-05-27T03:55:10.339603632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:55:10.340329 containerd[1570]: time="2025-05-27T03:55:10.340307352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:55:10.341770 containerd[1570]: time="2025-05-27T03:55:10.340935022Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:55:10.342638 containerd[1570]: time="2025-05-27T03:55:10.342613391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:55:10.343334 containerd[1570]: time="2025-05-27T03:55:10.343312781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 574.692413ms" May 27 03:55:10.343409 containerd[1570]: time="2025-05-27T03:55:10.343394960Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:55:10.343964 containerd[1570]: time="2025-05-27T03:55:10.343894470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 03:55:12.513843 containerd[1570]: time="2025-05-27T03:55:12.513762395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:12.521296 containerd[1570]: time="2025-05-27T03:55:12.520830102Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 03:55:12.525285 containerd[1570]: time="2025-05-27T03:55:12.524539650Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:12.528781 containerd[1570]: time="2025-05-27T03:55:12.528076318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:12.532565 containerd[1570]: time="2025-05-27T03:55:12.532530706Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.188607536s" May 27 03:55:12.532658 containerd[1570]: time="2025-05-27T03:55:12.532642256Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 03:55:15.007116 systemd[1]: Stopped kubelet.service - kubelet: The 
Kubernetes Node Agent. May 27 03:55:15.007307 systemd[1]: kubelet.service: Consumed 211ms CPU time, 111M memory peak. May 27 03:55:15.009639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:15.035341 systemd[1]: Reload requested from client PID 2218 ('systemctl') (unit session-7.scope)... May 27 03:55:15.035428 systemd[1]: Reloading... May 27 03:55:15.168249 zram_generator::config[2259]: No configuration found. May 27 03:55:15.273399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:55:15.376499 systemd[1]: Reloading finished in 340 ms. May 27 03:55:15.432702 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:55:15.432801 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:55:15.433126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:15.433217 systemd[1]: kubelet.service: Consumed 137ms CPU time, 98.3M memory peak. May 27 03:55:15.434803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:15.616995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:15.625517 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:55:15.665493 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:55:15.665493 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:55:15.665493 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
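The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file. A minimal sketch that writes such a file; the fields shown are an illustrative subset of KubeletConfiguration whose values are taken from or implied by this log (the systemd cgroup driver and the flexvolume directory appear in the entries that follow), not a complete working configuration:

    package main

    import (
        "log"
        "os"
    )

    // An illustrative KubeletConfiguration covering the deprecated flags:
    // the runtime endpoint and volume plugin dir move into the file, and
    // staticPodPath matches the "Adding static pod path" entry below.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    cgroupDriver: systemd
    `

    func main() {
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
    }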
May 27 03:55:15.665918 kubelet[2316]: I0527 03:55:15.665513 2316 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:55:16.496219 kubelet[2316]: I0527 03:55:16.496040 2316 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:55:16.496219 kubelet[2316]: I0527 03:55:16.496067 2316 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:55:16.496508 kubelet[2316]: I0527 03:55:16.496494 2316 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:55:16.528360 kubelet[2316]: E0527 03:55:16.528322 2316 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.237.145.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.145.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 03:55:16.528360 kubelet[2316]: I0527 03:55:16.528341 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:55:16.534652 kubelet[2316]: I0527 03:55:16.534622 2316 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:55:16.540072 kubelet[2316]: I0527 03:55:16.539391 2316 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:55:16.540072 kubelet[2316]: I0527 03:55:16.539615 2316 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:55:16.540072 kubelet[2316]: I0527 03:55:16.539637 2316 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-145-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:55:16.540072 kubelet[2316]: I0527 03:55:16.539776 2316 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:55:16.540278 kubelet[2316]: I0527 
03:55:16.539784 2316 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:55:16.540278 kubelet[2316]: I0527 03:55:16.539903 2316 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:16.543338 kubelet[2316]: I0527 03:55:16.543280 2316 kubelet.go:480] "Attempting to sync node with API server" May 27 03:55:16.543338 kubelet[2316]: I0527 03:55:16.543313 2316 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:55:16.543338 kubelet[2316]: I0527 03:55:16.543336 2316 kubelet.go:386] "Adding apiserver pod source" May 27 03:55:16.543338 kubelet[2316]: I0527 03:55:16.543351 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:55:16.551344 kubelet[2316]: I0527 03:55:16.551292 2316 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:55:16.551960 kubelet[2316]: I0527 03:55:16.551830 2316 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:55:16.554017 kubelet[2316]: W0527 03:55:16.552647 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 03:55:16.555880 kubelet[2316]: E0527 03:55:16.555476 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.145.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.145.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:55:16.555880 kubelet[2316]: E0527 03:55:16.555579 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.145.45:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-145-45&limit=500&resourceVersion=0\": dial tcp 172.237.145.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:55:16.555956 kubelet[2316]: I0527 03:55:16.555888 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:55:16.555956 kubelet[2316]: I0527 03:55:16.555950 2316 server.go:1289] "Started kubelet" May 27 03:55:16.557275 kubelet[2316]: I0527 03:55:16.557250 2316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:55:16.558732 kubelet[2316]: I0527 03:55:16.558717 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:55:16.559047 kubelet[2316]: I0527 03:55:16.558970 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:55:16.559367 kubelet[2316]: I0527 03:55:16.559337 2316 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:55:16.563469 kubelet[2316]: E0527 03:55:16.562178 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.145.45:6443/api/v1/namespaces/default/events\": dial tcp 172.237.145.45:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-145-45.1843461063b1ddc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-145-45,UID:172-237-145-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:172-237-145-45,},FirstTimestamp:2025-05-27 03:55:16.555906503 +0000 UTC m=+0.925147648,LastTimestamp:2025-05-27 03:55:16.555906503 +0000 UTC m=+0.925147648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-145-45,}" May 27 03:55:16.564897 kubelet[2316]: E0527 03:55:16.564851 2316 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:55:16.565855 kubelet[2316]: I0527 03:55:16.565791 2316 server.go:317] "Adding debug handlers to kubelet server" May 27 03:55:16.566806 kubelet[2316]: I0527 03:55:16.566772 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:55:16.568307 kubelet[2316]: I0527 03:55:16.568205 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:55:16.569691 kubelet[2316]: I0527 03:55:16.568394 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:55:16.570881 kubelet[2316]: E0527 03:55:16.568645 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-145-45\" not found" May 27 03:55:16.570881 kubelet[2316]: I0527 03:55:16.569864 2316 reconciler.go:26] "Reconciler: start to sync state" May 27 03:55:16.570881 kubelet[2316]: E0527 03:55:16.569958 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-45?timeout=10s\": dial tcp 172.237.145.45:6443: connect: connection refused" interval="200ms" May 27 03:55:16.571017 kubelet[2316]: E0527 03:55:16.571001 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.237.145.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.145.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:55:16.571788 kubelet[2316]: I0527 03:55:16.571705 2316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:55:16.574233 kubelet[2316]: I0527 03:55:16.573289 2316 factory.go:223] Registration of the containerd container factory successfully May 27 03:55:16.574233 kubelet[2316]: I0527 03:55:16.573306 2316 factory.go:223] Registration of the systemd container factory successfully May 27 03:55:16.589912 kubelet[2316]: I0527 03:55:16.589846 2316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:55:16.597319 kubelet[2316]: I0527 03:55:16.597299 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:55:16.597395 kubelet[2316]: I0527 03:55:16.597384 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:55:16.597459 kubelet[2316]: I0527 03:55:16.597449 2316 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:16.598253 kubelet[2316]: I0527 03:55:16.598058 2316 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:55:16.598742 kubelet[2316]: I0527 03:55:16.598321 2316 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:55:16.598742 kubelet[2316]: I0527 03:55:16.598348 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:55:16.598742 kubelet[2316]: I0527 03:55:16.598357 2316 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:55:16.598742 kubelet[2316]: E0527 03:55:16.598405 2316 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:55:16.599222 kubelet[2316]: I0527 03:55:16.599209 2316 policy_none.go:49] "None policy: Start" May 27 03:55:16.599567 kubelet[2316]: I0527 03:55:16.599554 2316 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:55:16.599625 kubelet[2316]: I0527 03:55:16.599616 2316 state_mem.go:35] "Initializing new in-memory state store" May 27 03:55:16.599765 kubelet[2316]: E0527 03:55:16.599450 2316 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.237.145.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.145.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:55:16.607658 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:55:16.622561 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:55:16.626366 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:55:16.637073 kubelet[2316]: E0527 03:55:16.637049 2316 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:55:16.638606 kubelet[2316]: I0527 03:55:16.638578 2316 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:55:16.638656 kubelet[2316]: I0527 03:55:16.638600 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:55:16.639156 kubelet[2316]: I0527 03:55:16.638892 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:55:16.640829 kubelet[2316]: E0527 03:55:16.640810 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:55:16.640910 kubelet[2316]: E0527 03:55:16.640899 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-145-45\" not found" May 27 03:55:16.713101 systemd[1]: Created slice kubepods-burstable-pod9d34260864eeae2fe8eafeeb5f28aef8.slice - libcontainer container kubepods-burstable-pod9d34260864eeae2fe8eafeeb5f28aef8.slice. May 27 03:55:16.722064 kubelet[2316]: E0527 03:55:16.722038 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:16.725978 systemd[1]: Created slice kubepods-burstable-pod81d9443f45e142e6ddb11e80c6fa57f5.slice - libcontainer container kubepods-burstable-pod81d9443f45e142e6ddb11e80c6fa57f5.slice. 
May 27 03:55:16.729211 kubelet[2316]: E0527 03:55:16.729153 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:16.731293 systemd[1]: Created slice kubepods-burstable-pod6c2d392632e8bfb1ca183f41328750f0.slice - libcontainer container kubepods-burstable-pod6c2d392632e8bfb1ca183f41328750f0.slice. May 27 03:55:16.733427 kubelet[2316]: E0527 03:55:16.733382 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:16.741048 kubelet[2316]: I0527 03:55:16.741027 2316 kubelet_node_status.go:75] "Attempting to register node" node="172-237-145-45" May 27 03:55:16.741384 kubelet[2316]: E0527 03:55:16.741359 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.145.45:6443/api/v1/nodes\": dial tcp 172.237.145.45:6443: connect: connection refused" node="172-237-145-45" May 27 03:55:16.771873 kubelet[2316]: E0527 03:55:16.771773 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-45?timeout=10s\": dial tcp 172.237.145.45:6443: connect: connection refused" interval="400ms" May 27 03:55:16.871807 kubelet[2316]: I0527 03:55:16.871727 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d34260864eeae2fe8eafeeb5f28aef8-ca-certs\") pod \"kube-apiserver-172-237-145-45\" (UID: \"9d34260864eeae2fe8eafeeb5f28aef8\") " pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:16.871807 kubelet[2316]: I0527 03:55:16.871768 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d34260864eeae2fe8eafeeb5f28aef8-k8s-certs\") pod \"kube-apiserver-172-237-145-45\" (UID: \"9d34260864eeae2fe8eafeeb5f28aef8\") " pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:16.871807 kubelet[2316]: I0527 03:55:16.871792 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d34260864eeae2fe8eafeeb5f28aef8-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-145-45\" (UID: \"9d34260864eeae2fe8eafeeb5f28aef8\") " pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:16.871938 kubelet[2316]: I0527 03:55:16.871818 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-ca-certs\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:16.871938 kubelet[2316]: I0527 03:55:16.871835 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-flexvolume-dir\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:16.871938 kubelet[2316]: I0527 03:55:16.871851 2316 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-k8s-certs\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:16.871938 kubelet[2316]: I0527 03:55:16.871867 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-kubeconfig\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:16.871938 kubelet[2316]: I0527 03:55:16.871884 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:16.872062 kubelet[2316]: I0527 03:55:16.871902 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c2d392632e8bfb1ca183f41328750f0-kubeconfig\") pod \"kube-scheduler-172-237-145-45\" (UID: \"6c2d392632e8bfb1ca183f41328750f0\") " pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:16.943300 kubelet[2316]: I0527 03:55:16.943254 2316 kubelet_node_status.go:75] "Attempting to register node" node="172-237-145-45" May 27 03:55:16.943691 kubelet[2316]: E0527 03:55:16.943640 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.145.45:6443/api/v1/nodes\": dial tcp 172.237.145.45:6443: connect: connection refused" node="172-237-145-45" May 27 03:55:17.023571 kubelet[2316]: E0527 03:55:17.023442 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.024387 containerd[1570]: time="2025-05-27T03:55:17.024167449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-145-45,Uid:9d34260864eeae2fe8eafeeb5f28aef8,Namespace:kube-system,Attempt:0,}" May 27 03:55:17.029789 kubelet[2316]: E0527 03:55:17.029771 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.030308 containerd[1570]: time="2025-05-27T03:55:17.030231106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-145-45,Uid:81d9443f45e142e6ddb11e80c6fa57f5,Namespace:kube-system,Attempt:0,}" May 27 03:55:17.034610 kubelet[2316]: E0527 03:55:17.034496 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.035096 containerd[1570]: time="2025-05-27T03:55:17.035062694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-145-45,Uid:6c2d392632e8bfb1ca183f41328750f0,Namespace:kube-system,Attempt:0,}" May 27 03:55:17.049336 containerd[1570]: time="2025-05-27T03:55:17.049238577Z" level=info 
msg="connecting to shim 80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286" address="unix:///run/containerd/s/d1a003762cde736f241e0072cfeb57743883f8986c742c43203e0a6ba95986b4" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:17.081057 containerd[1570]: time="2025-05-27T03:55:17.081021341Z" level=info msg="connecting to shim 186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7" address="unix:///run/containerd/s/698db7a8300e9cf5163f66534b7dd72dc87fe1d449570b3a00d5575f21eca17a" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:17.084430 containerd[1570]: time="2025-05-27T03:55:17.084395269Z" level=info msg="connecting to shim 908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b" address="unix:///run/containerd/s/3b035a3ed5ff658b2382cfc6496d07833334496a14a5875cc32665f065ba443d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:17.106431 systemd[1]: Started cri-containerd-80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286.scope - libcontainer container 80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286. May 27 03:55:17.129417 systemd[1]: Started cri-containerd-908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b.scope - libcontainer container 908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b. May 27 03:55:17.136694 systemd[1]: Started cri-containerd-186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7.scope - libcontainer container 186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7. May 27 03:55:17.173330 kubelet[2316]: E0527 03:55:17.173266 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.145.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-145-45?timeout=10s\": dial tcp 172.237.145.45:6443: connect: connection refused" interval="800ms" May 27 03:55:17.207172 containerd[1570]: time="2025-05-27T03:55:17.207102058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-145-45,Uid:81d9443f45e142e6ddb11e80c6fa57f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b\"" May 27 03:55:17.210219 kubelet[2316]: E0527 03:55:17.210128 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.213049 containerd[1570]: time="2025-05-27T03:55:17.213002615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-145-45,Uid:9d34260864eeae2fe8eafeeb5f28aef8,Namespace:kube-system,Attempt:0,} returns sandbox id \"80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286\"" May 27 03:55:17.213929 kubelet[2316]: E0527 03:55:17.213898 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.219703 containerd[1570]: time="2025-05-27T03:55:17.219634072Z" level=info msg="CreateContainer within sandbox \"908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:55:17.223560 containerd[1570]: time="2025-05-27T03:55:17.223534130Z" level=info msg="CreateContainer within sandbox \"80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:55:17.231381 containerd[1570]: time="2025-05-27T03:55:17.231291196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-145-45,Uid:6c2d392632e8bfb1ca183f41328750f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7\"" May 27 03:55:17.232831 kubelet[2316]: E0527 03:55:17.232803 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.235810 containerd[1570]: time="2025-05-27T03:55:17.234778794Z" level=info msg="Container d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:17.236002 containerd[1570]: time="2025-05-27T03:55:17.235981593Z" level=info msg="CreateContainer within sandbox \"186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:55:17.236898 containerd[1570]: time="2025-05-27T03:55:17.236879443Z" level=info msg="Container 45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:17.243486 containerd[1570]: time="2025-05-27T03:55:17.243434150Z" level=info msg="CreateContainer within sandbox \"908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da\"" May 27 03:55:17.245030 containerd[1570]: time="2025-05-27T03:55:17.245011399Z" level=info msg="StartContainer for \"d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da\"" May 27 03:55:17.245800 containerd[1570]: time="2025-05-27T03:55:17.245775878Z" level=info msg="CreateContainer within sandbox \"80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17\"" May 27 03:55:17.247089 containerd[1570]: time="2025-05-27T03:55:17.246952268Z" level=info msg="connecting to shim d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da" address="unix:///run/containerd/s/3b035a3ed5ff658b2382cfc6496d07833334496a14a5875cc32665f065ba443d" protocol=ttrpc version=3 May 27 03:55:17.247541 containerd[1570]: time="2025-05-27T03:55:17.247523098Z" level=info msg="Container 9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:17.248298 containerd[1570]: time="2025-05-27T03:55:17.248275997Z" level=info msg="StartContainer for \"45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17\"" May 27 03:55:17.249608 containerd[1570]: time="2025-05-27T03:55:17.249587517Z" level=info msg="connecting to shim 45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17" address="unix:///run/containerd/s/d1a003762cde736f241e0072cfeb57743883f8986c742c43203e0a6ba95986b4" protocol=ttrpc version=3 May 27 03:55:17.255736 containerd[1570]: time="2025-05-27T03:55:17.255696244Z" level=info msg="CreateContainer within sandbox \"186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4\"" May 27 
03:55:17.257403 containerd[1570]: time="2025-05-27T03:55:17.257361083Z" level=info msg="StartContainer for \"9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4\"" May 27 03:55:17.258950 containerd[1570]: time="2025-05-27T03:55:17.258902252Z" level=info msg="connecting to shim 9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4" address="unix:///run/containerd/s/698db7a8300e9cf5163f66534b7dd72dc87fe1d449570b3a00d5575f21eca17a" protocol=ttrpc version=3 May 27 03:55:17.281436 systemd[1]: Started cri-containerd-d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da.scope - libcontainer container d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da. May 27 03:55:17.293361 systemd[1]: Started cri-containerd-9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4.scope - libcontainer container 9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4. May 27 03:55:17.299036 systemd[1]: Started cri-containerd-45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17.scope - libcontainer container 45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17. May 27 03:55:17.346081 kubelet[2316]: I0527 03:55:17.346041 2316 kubelet_node_status.go:75] "Attempting to register node" node="172-237-145-45" May 27 03:55:17.346920 kubelet[2316]: E0527 03:55:17.346877 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.145.45:6443/api/v1/nodes\": dial tcp 172.237.145.45:6443: connect: connection refused" node="172-237-145-45" May 27 03:55:17.376363 containerd[1570]: time="2025-05-27T03:55:17.376263293Z" level=info msg="StartContainer for \"d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da\" returns successfully" May 27 03:55:17.379534 containerd[1570]: time="2025-05-27T03:55:17.379452372Z" level=info msg="StartContainer for \"45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17\" returns successfully" May 27 03:55:17.408744 containerd[1570]: time="2025-05-27T03:55:17.408676587Z" level=info msg="StartContainer for \"9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4\" returns successfully" May 27 03:55:17.605807 kubelet[2316]: E0527 03:55:17.605489 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:17.605807 kubelet[2316]: E0527 03:55:17.605609 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.611852 kubelet[2316]: E0527 03:55:17.611680 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:17.611852 kubelet[2316]: E0527 03:55:17.611762 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:17.614128 kubelet[2316]: E0527 03:55:17.614113 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:17.614646 kubelet[2316]: E0527 03:55:17.614634 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:18.155354 kubelet[2316]: I0527 03:55:18.154615 2316 kubelet_node_status.go:75] "Attempting to register node" node="172-237-145-45" May 27 03:55:18.617542 kubelet[2316]: E0527 03:55:18.616962 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:18.617542 kubelet[2316]: E0527 03:55:18.617083 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:18.618350 kubelet[2316]: E0527 03:55:18.618255 2316 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:18.618436 kubelet[2316]: E0527 03:55:18.618422 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:19.302878 kubelet[2316]: E0527 03:55:19.302818 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-145-45\" not found" node="172-237-145-45" May 27 03:55:19.462169 kubelet[2316]: I0527 03:55:19.462105 2316 kubelet_node_status.go:78] "Successfully registered node" node="172-237-145-45" May 27 03:55:19.462169 kubelet[2316]: E0527 03:55:19.462161 2316 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-237-145-45\": node \"172-237-145-45\" not found" May 27 03:55:19.469565 kubelet[2316]: I0527 03:55:19.469547 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:19.482577 kubelet[2316]: E0527 03:55:19.482558 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-145-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:19.482689 kubelet[2316]: I0527 03:55:19.482679 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:19.483937 kubelet[2316]: E0527 03:55:19.483901 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-145-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:19.483937 kubelet[2316]: I0527 03:55:19.483930 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:19.484852 kubelet[2316]: E0527 03:55:19.484821 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-145-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:19.557774 kubelet[2316]: I0527 03:55:19.557399 2316 apiserver.go:52] "Watching apiserver" May 27 03:55:19.571833 kubelet[2316]: I0527 03:55:19.571804 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:55:20.134645 kubelet[2316]: I0527 03:55:20.134615 2316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:20.136432 kubelet[2316]: 
E0527 03:55:20.136408 2316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-145-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:20.136563 kubelet[2316]: E0527 03:55:20.136536 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:21.088704 systemd[1]: Reload requested from client PID 2597 ('systemctl') (unit session-7.scope)... May 27 03:55:21.088724 systemd[1]: Reloading... May 27 03:55:21.201214 zram_generator::config[2650]: No configuration found. May 27 03:55:21.287966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:55:21.403612 systemd[1]: Reloading finished in 314 ms. May 27 03:55:21.434023 kubelet[2316]: I0527 03:55:21.433987 2316 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:55:21.434456 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:21.449528 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:55:21.449849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:21.449902 systemd[1]: kubelet.service: Consumed 1.313s CPU time, 129.5M memory peak. May 27 03:55:21.451901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:21.639797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:21.647766 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:55:21.686827 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:55:21.687080 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:55:21.687080 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
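Both kubelet generations in this log (tagged 2316 above, 2692 after this reload) print the same container-manager nodeConfig, whose HardEvictionThresholds are the defaults: memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%. A toy evaluation of the memory rule, with invented numbers, to make the signal concrete:

    package main

    import "fmt"

    func main() {
        const mi int64 = 1 << 20
        // Hypothetical node; neither figure appears in this log.
        capacity := 4096 * mi   // total memory
        workingSet := 4000 * mi // everything in use
        available := capacity - workingSet
        threshold := 100 * mi // memory.available < 100Mi from the dump
        fmt.Printf("memory.available=%dMi threshold=%dMi evict=%v\n",
            available/mi, threshold/mi, available < threshold)
        // Prints: memory.available=96Mi threshold=100Mi evict=true
    }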
May 27 03:55:21.687080 kubelet[2692]: I0527 03:55:21.687043 2692 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:55:21.694220 kubelet[2692]: I0527 03:55:21.694181 2692 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:55:21.694372 kubelet[2692]: I0527 03:55:21.694360 2692 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:55:21.694840 kubelet[2692]: I0527 03:55:21.694824 2692 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:55:21.696685 kubelet[2692]: I0527 03:55:21.696055 2692 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 03:55:21.701062 kubelet[2692]: I0527 03:55:21.701032 2692 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:55:21.706081 kubelet[2692]: I0527 03:55:21.706065 2692 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:55:21.709731 kubelet[2692]: I0527 03:55:21.709717 2692 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:55:21.710079 kubelet[2692]: I0527 03:55:21.710061 2692 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:55:21.710271 kubelet[2692]: I0527 03:55:21.710121 2692 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-145-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:55:21.710484 kubelet[2692]: I0527 03:55:21.710399 2692 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:55:21.710484 kubelet[2692]: I0527 03:55:21.710409 2692 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:55:21.710484 kubelet[2692]: I0527 03:55:21.710448 2692 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:21.710709 kubelet[2692]: I0527 
03:55:21.710696 2692 kubelet.go:480] "Attempting to sync node with API server" May 27 03:55:21.710766 kubelet[2692]: I0527 03:55:21.710756 2692 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:55:21.710830 kubelet[2692]: I0527 03:55:21.710821 2692 kubelet.go:386] "Adding apiserver pod source" May 27 03:55:21.710893 kubelet[2692]: I0527 03:55:21.710883 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:55:21.715817 kubelet[2692]: I0527 03:55:21.715757 2692 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:55:21.716270 kubelet[2692]: I0527 03:55:21.716245 2692 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:55:21.721658 kubelet[2692]: I0527 03:55:21.720651 2692 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:55:21.721658 kubelet[2692]: I0527 03:55:21.720691 2692 server.go:1289] "Started kubelet" May 27 03:55:21.724994 kubelet[2692]: I0527 03:55:21.724966 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:55:21.726267 kubelet[2692]: I0527 03:55:21.725802 2692 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:55:21.727951 kubelet[2692]: I0527 03:55:21.727916 2692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:55:21.728700 kubelet[2692]: I0527 03:55:21.728540 2692 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:55:21.729242 kubelet[2692]: I0527 03:55:21.729213 2692 server.go:317] "Adding debug handlers to kubelet server" May 27 03:55:21.732062 kubelet[2692]: I0527 03:55:21.731419 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:55:21.734311 kubelet[2692]: I0527 03:55:21.733556 2692 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:55:21.735305 kubelet[2692]: I0527 03:55:21.735270 2692 factory.go:223] Registration of the systemd container factory successfully May 27 03:55:21.735517 kubelet[2692]: I0527 03:55:21.735454 2692 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:55:21.737347 kubelet[2692]: I0527 03:55:21.737315 2692 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:55:21.737447 kubelet[2692]: I0527 03:55:21.737431 2692 reconciler.go:26] "Reconciler: start to sync state" May 27 03:55:21.740176 kubelet[2692]: E0527 03:55:21.739806 2692 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:55:21.740953 kubelet[2692]: I0527 03:55:21.740923 2692 factory.go:223] Registration of the containerd container factory successfully May 27 03:55:21.747449 kubelet[2692]: I0527 03:55:21.747429 2692 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:55:21.748651 kubelet[2692]: I0527 03:55:21.748637 2692 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:55:21.748724 kubelet[2692]: I0527 03:55:21.748714 2692 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:55:21.748786 kubelet[2692]: I0527 03:55:21.748776 2692 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:55:21.748876 kubelet[2692]: I0527 03:55:21.748867 2692 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:55:21.748973 kubelet[2692]: E0527 03:55:21.748948 2692 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792694 2692 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792706 2692 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792722 2692 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792813 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792822 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792835 2692 policy_none.go:49] "None policy: Start" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792844 2692 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792853 2692 state_mem.go:35] "Initializing new in-memory state store" May 27 03:55:21.793575 kubelet[2692]: I0527 03:55:21.792920 2692 state_mem.go:75] "Updated machine memory state" May 27 03:55:21.797571 kubelet[2692]: E0527 03:55:21.797557 2692 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:55:21.797918 kubelet[2692]: I0527 03:55:21.797906 2692 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:55:21.798240 kubelet[2692]: I0527 03:55:21.798208 2692 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:55:21.798755 kubelet[2692]: I0527 03:55:21.798742 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:55:21.802770 kubelet[2692]: E0527 03:55:21.802739 2692 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:55:21.850360 kubelet[2692]: I0527 03:55:21.850337 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:21.850548 kubelet[2692]: I0527 03:55:21.850524 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:21.850713 kubelet[2692]: I0527 03:55:21.850399 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:21.906767 kubelet[2692]: I0527 03:55:21.906741 2692 kubelet_node_status.go:75] "Attempting to register node" node="172-237-145-45" May 27 03:55:21.914702 kubelet[2692]: I0527 03:55:21.914639 2692 kubelet_node_status.go:124] "Node was previously registered" node="172-237-145-45" May 27 03:55:21.914702 kubelet[2692]: I0527 03:55:21.914700 2692 kubelet_node_status.go:78] "Successfully registered node" node="172-237-145-45" May 27 03:55:22.038609 kubelet[2692]: I0527 03:55:22.038521 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-k8s-certs\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:22.038609 kubelet[2692]: I0527 03:55:22.038552 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c2d392632e8bfb1ca183f41328750f0-kubeconfig\") pod \"kube-scheduler-172-237-145-45\" (UID: \"6c2d392632e8bfb1ca183f41328750f0\") " pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:22.038609 kubelet[2692]: I0527 03:55:22.038569 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d34260864eeae2fe8eafeeb5f28aef8-ca-certs\") pod \"kube-apiserver-172-237-145-45\" (UID: \"9d34260864eeae2fe8eafeeb5f28aef8\") " pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:22.038609 kubelet[2692]: I0527 03:55:22.038588 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d34260864eeae2fe8eafeeb5f28aef8-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-145-45\" (UID: \"9d34260864eeae2fe8eafeeb5f28aef8\") " pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:22.038609 kubelet[2692]: I0527 03:55:22.038611 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-flexvolume-dir\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:22.038776 kubelet[2692]: I0527 03:55:22.038630 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-kubeconfig\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:22.038776 kubelet[2692]: I0527 03:55:22.038646 2692 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:22.038776 kubelet[2692]: I0527 03:55:22.038661 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d34260864eeae2fe8eafeeb5f28aef8-k8s-certs\") pod \"kube-apiserver-172-237-145-45\" (UID: \"9d34260864eeae2fe8eafeeb5f28aef8\") " pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:22.038776 kubelet[2692]: I0527 03:55:22.038679 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81d9443f45e142e6ddb11e80c6fa57f5-ca-certs\") pod \"kube-controller-manager-172-237-145-45\" (UID: \"81d9443f45e142e6ddb11e80c6fa57f5\") " pod="kube-system/kube-controller-manager-172-237-145-45" May 27 03:55:22.156208 kubelet[2692]: E0527 03:55:22.156099 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:22.156445 kubelet[2692]: E0527 03:55:22.156431 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:22.156616 kubelet[2692]: E0527 03:55:22.156602 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:22.404277 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 27 03:55:22.713296 kubelet[2692]: I0527 03:55:22.712538 2692 apiserver.go:52] "Watching apiserver" May 27 03:55:22.737853 kubelet[2692]: I0527 03:55:22.737807 2692 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:55:22.774664 kubelet[2692]: I0527 03:55:22.774617 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:22.775210 kubelet[2692]: E0527 03:55:22.775064 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:22.776293 kubelet[2692]: I0527 03:55:22.776258 2692 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:22.782768 kubelet[2692]: E0527 03:55:22.782634 2692 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-145-45\" already exists" pod="kube-system/kube-apiserver-172-237-145-45" May 27 03:55:22.783288 kubelet[2692]: E0527 03:55:22.783113 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:22.783615 kubelet[2692]: E0527 03:55:22.783559 2692 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-145-45\" already exists" pod="kube-system/kube-scheduler-172-237-145-45" May 27 03:55:22.784029 kubelet[2692]: E0527 03:55:22.783878 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:22.801058 kubelet[2692]: I0527 03:55:22.801008 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-145-45" podStartSLOduration=1.80096817 podStartE2EDuration="1.80096817s" podCreationTimestamp="2025-05-27 03:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:22.800038651 +0000 UTC m=+1.147857727" watchObservedRunningTime="2025-05-27 03:55:22.80096817 +0000 UTC m=+1.148787246" May 27 03:55:22.810653 kubelet[2692]: I0527 03:55:22.810591 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-145-45" podStartSLOduration=1.810578605 podStartE2EDuration="1.810578605s" podCreationTimestamp="2025-05-27 03:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:22.810332186 +0000 UTC m=+1.158151262" watchObservedRunningTime="2025-05-27 03:55:22.810578605 +0000 UTC m=+1.158397681" May 27 03:55:23.776654 kubelet[2692]: E0527 03:55:23.776328 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:23.776654 kubelet[2692]: E0527 03:55:23.776369 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:24.778116 kubelet[2692]: E0527 03:55:24.778068 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:26.357649 kubelet[2692]: E0527 03:55:26.357623 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:26.854461 kubelet[2692]: I0527 03:55:26.854352 2692 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:55:26.855050 containerd[1570]: time="2025-05-27T03:55:26.855022371Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:55:26.855516 kubelet[2692]: I0527 03:55:26.855183 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:55:27.930975 kubelet[2692]: I0527 03:55:27.930716 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-145-45" podStartSLOduration=6.930681921 podStartE2EDuration="6.930681921s" podCreationTimestamp="2025-05-27 03:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:22.820100671 +0000 UTC m=+1.167919747" watchObservedRunningTime="2025-05-27 03:55:27.930681921 +0000 UTC m=+6.278500997" May 27 03:55:27.943766 systemd[1]: Created slice kubepods-besteffort-podcd9dba04_57a1_4a95_861f_171fb6522532.slice - libcontainer container kubepods-besteffort-podcd9dba04_57a1_4a95_861f_171fb6522532.slice. May 27 03:55:27.975394 kubelet[2692]: I0527 03:55:27.975267 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd9dba04-57a1-4a95-861f-171fb6522532-kube-proxy\") pod \"kube-proxy-2gjrc\" (UID: \"cd9dba04-57a1-4a95-861f-171fb6522532\") " pod="kube-system/kube-proxy-2gjrc" May 27 03:55:27.975394 kubelet[2692]: I0527 03:55:27.975370 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzkjh\" (UniqueName: \"kubernetes.io/projected/cd9dba04-57a1-4a95-861f-171fb6522532-kube-api-access-qzkjh\") pod \"kube-proxy-2gjrc\" (UID: \"cd9dba04-57a1-4a95-861f-171fb6522532\") " pod="kube-system/kube-proxy-2gjrc" May 27 03:55:27.975394 kubelet[2692]: I0527 03:55:27.975391 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd9dba04-57a1-4a95-861f-171fb6522532-xtables-lock\") pod \"kube-proxy-2gjrc\" (UID: \"cd9dba04-57a1-4a95-861f-171fb6522532\") " pod="kube-system/kube-proxy-2gjrc" May 27 03:55:27.975564 kubelet[2692]: I0527 03:55:27.975409 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd9dba04-57a1-4a95-861f-171fb6522532-lib-modules\") pod \"kube-proxy-2gjrc\" (UID: \"cd9dba04-57a1-4a95-861f-171fb6522532\") " pod="kube-system/kube-proxy-2gjrc" May 27 03:55:28.048154 systemd[1]: Created slice kubepods-besteffort-pod389caa83_c5b0_433d_ba52_4aa58c35d0eb.slice - libcontainer container kubepods-besteffort-pod389caa83_c5b0_433d_ba52_4aa58c35d0eb.slice. 
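With registration settled, the kubelet pushes PodCIDR 192.168.0.0/24 to containerd over the CRI; the "No cni config template is specified" line means containerd waits for a CNI plugin (presumably Calico, given the tigera-operator pod being set up here) to write the actual config. A quick stdlib look at what that per-node range provides:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // PodCIDR exactly as pushed to the runtime above.
        _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Printf("pod network %v: %d addresses for this node\n",
            ipnet, 1<<(bits-ones))
    }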
May 27 03:55:28.076197 kubelet[2692]: I0527 03:55:28.076129 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzq25\" (UniqueName: \"kubernetes.io/projected/389caa83-c5b0-433d-ba52-4aa58c35d0eb-kube-api-access-gzq25\") pod \"tigera-operator-844669ff44-85gv6\" (UID: \"389caa83-c5b0-433d-ba52-4aa58c35d0eb\") " pod="tigera-operator/tigera-operator-844669ff44-85gv6" May 27 03:55:28.076197 kubelet[2692]: I0527 03:55:28.076198 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/389caa83-c5b0-433d-ba52-4aa58c35d0eb-var-lib-calico\") pod \"tigera-operator-844669ff44-85gv6\" (UID: \"389caa83-c5b0-433d-ba52-4aa58c35d0eb\") " pod="tigera-operator/tigera-operator-844669ff44-85gv6" May 27 03:55:28.253498 kubelet[2692]: E0527 03:55:28.253218 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:28.254042 containerd[1570]: time="2025-05-27T03:55:28.253999923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2gjrc,Uid:cd9dba04-57a1-4a95-861f-171fb6522532,Namespace:kube-system,Attempt:0,}" May 27 03:55:28.276404 containerd[1570]: time="2025-05-27T03:55:28.276171859Z" level=info msg="connecting to shim 0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a" address="unix:///run/containerd/s/5bc3083691b51339def52afbfc7a8c4a67fe8472df61f085e0334426653310a3" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:28.309323 systemd[1]: Started cri-containerd-0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a.scope - libcontainer container 0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a. 
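
Each sandbox start above is preceded by a "connecting to shim" event: containerd dials the shim's unix socket under /run/containerd/s/ and speaks ttrpc to it (protocol=ttrpc version=3 in the log). A minimal sketch of such a connection, assuming the github.com/containerd/ttrpc client library and reusing the socket address from the kube-proxy sandbox entry above; the actual task-service RPCs are omitted:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    	"time"

    	"github.com/containerd/ttrpc"
    )

    // dialShim connects to a containerd shim endpoint of the form
    // unix:///run/containerd/s/<id> and wraps the raw connection in a
    // ttrpc client, the lightweight RPC protocol the shims speak.
    func dialShim(address string) (*ttrpc.Client, error) {
    	path := strings.TrimPrefix(address, "unix://")
    	conn, err := net.DialTimeout("unix", path, 5*time.Second)
    	if err != nil {
    		return nil, fmt.Errorf("dial shim %s: %w", address, err)
    	}
    	return ttrpc.NewClient(conn), nil
    }

    func main() {
    	// Socket path taken from the kube-proxy sandbox creation entry.
    	client, err := dialShim("unix:///run/containerd/s/5bc3083691b51339def52afbfc7a8c4a67fe8472df61f085e0334426653310a3")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer client.Close()
    	fmt.Println("connected to shim over ttrpc")
    }
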
May 27 03:55:28.338692 containerd[1570]: time="2025-05-27T03:55:28.338648569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2gjrc,Uid:cd9dba04-57a1-4a95-861f-171fb6522532,Namespace:kube-system,Attempt:0,} returns sandbox id \"0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a\"" May 27 03:55:28.339893 kubelet[2692]: E0527 03:55:28.339866 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:28.346565 containerd[1570]: time="2025-05-27T03:55:28.346451169Z" level=info msg="CreateContainer within sandbox \"0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:55:28.353419 containerd[1570]: time="2025-05-27T03:55:28.353399201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-85gv6,Uid:389caa83-c5b0-433d-ba52-4aa58c35d0eb,Namespace:tigera-operator,Attempt:0,}" May 27 03:55:28.359911 containerd[1570]: time="2025-05-27T03:55:28.359877844Z" level=info msg="Container 8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:28.368440 containerd[1570]: time="2025-05-27T03:55:28.368406961Z" level=info msg="CreateContainer within sandbox \"0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220\"" May 27 03:55:28.370572 containerd[1570]: time="2025-05-27T03:55:28.369560696Z" level=info msg="StartContainer for \"8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220\"" May 27 03:55:28.373737 containerd[1570]: time="2025-05-27T03:55:28.373678116Z" level=info msg="connecting to shim 8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220" address="unix:///run/containerd/s/5bc3083691b51339def52afbfc7a8c4a67fe8472df61f085e0334426653310a3" protocol=ttrpc version=3 May 27 03:55:28.380122 containerd[1570]: time="2025-05-27T03:55:28.380100347Z" level=info msg="connecting to shim 89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d" address="unix:///run/containerd/s/d6754cd7a23ca84c41e6e14ec6f1267e1a535381d42234da0a4ac13dc33685c4" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:28.403350 systemd[1]: Started cri-containerd-8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220.scope - libcontainer container 8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220. May 27 03:55:28.415318 systemd[1]: Started cri-containerd-89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d.scope - libcontainer container 89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d. 
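
The CreateContainer/StartContainer events above are the tail end of kubelet's CRI calls: a container is created inside an existing sandbox, then started by the returned ID. A minimal sketch of that sequence against containerd's default CRI socket, assuming the k8s.io/cri-api client bindings; the sandbox ID is taken from the log, while the image reference and trimmed-down configs are illustrative placeholders, not the real kube-proxy spec:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// containerd's default CRI endpoint; the per-container shim
    	// sockets above are internal, CRI clients use this socket.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// Sandbox ID from the RunPodSandbox entry for kube-proxy-2gjrc.
    	sandboxID := "0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a"
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sandboxID,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
    			// Placeholder image reference for illustration only.
    			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
    		},
    		SandboxConfig: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name: "kube-proxy-2gjrc", Namespace: "kube-system", Attempt: 0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: created.ContainerId,
    	}); err != nil {
    		panic(err)
    	}
    	fmt.Println("started container", created.ContainerId)
    }
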
May 27 03:55:28.457159 containerd[1570]: time="2025-05-27T03:55:28.457113394Z" level=info msg="StartContainer for \"8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220\" returns successfully" May 27 03:55:28.483911 containerd[1570]: time="2025-05-27T03:55:28.483860421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-85gv6,Uid:389caa83-c5b0-433d-ba52-4aa58c35d0eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d\"" May 27 03:55:28.488345 containerd[1570]: time="2025-05-27T03:55:28.488315379Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:55:28.787948 kubelet[2692]: E0527 03:55:28.787893 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:29.096378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769053805.mount: Deactivated successfully. May 27 03:55:29.701767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167544290.mount: Deactivated successfully. May 27 03:55:30.206689 containerd[1570]: time="2025-05-27T03:55:30.206636371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:30.207784 containerd[1570]: time="2025-05-27T03:55:30.207586049Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:55:30.208650 containerd[1570]: time="2025-05-27T03:55:30.208611979Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:30.210255 containerd[1570]: time="2025-05-27T03:55:30.210226720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:30.210847 containerd[1570]: time="2025-05-27T03:55:30.210816941Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.722470102s" May 27 03:55:30.210919 containerd[1570]: time="2025-05-27T03:55:30.210905503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:55:30.214963 containerd[1570]: time="2025-05-27T03:55:30.214917530Z" level=info msg="CreateContainer within sandbox \"89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:55:30.222213 containerd[1570]: time="2025-05-27T03:55:30.222103138Z" level=info msg="Container 88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:30.230616 containerd[1570]: time="2025-05-27T03:55:30.230586741Z" level=info msg="CreateContainer within sandbox \"89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32\"" May 27 03:55:30.231665 containerd[1570]: time="2025-05-27T03:55:30.231597740Z" level=info msg="StartContainer for \"88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32\"" May 27 03:55:30.232351 containerd[1570]: time="2025-05-27T03:55:30.232320355Z" level=info msg="connecting to shim 88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32" address="unix:///run/containerd/s/d6754cd7a23ca84c41e6e14ec6f1267e1a535381d42234da0a4ac13dc33685c4" protocol=ttrpc version=3 May 27 03:55:30.255506 systemd[1]: Started cri-containerd-88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32.scope - libcontainer container 88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32. May 27 03:55:30.287278 containerd[1570]: time="2025-05-27T03:55:30.287136117Z" level=info msg="StartContainer for \"88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32\" returns successfully" May 27 03:55:30.361777 kubelet[2692]: E0527 03:55:30.361458 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:30.374431 kubelet[2692]: I0527 03:55:30.374325 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2gjrc" podStartSLOduration=3.374312331 podStartE2EDuration="3.374312331s" podCreationTimestamp="2025-05-27 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:28.799163372 +0000 UTC m=+7.146982448" watchObservedRunningTime="2025-05-27 03:55:30.374312331 +0000 UTC m=+8.722131407" May 27 03:55:30.793229 kubelet[2692]: E0527 03:55:30.792378 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:30.814915 kubelet[2692]: I0527 03:55:30.814861 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-85gv6" podStartSLOduration=1.09046047 podStartE2EDuration="2.814846981s" podCreationTimestamp="2025-05-27 03:55:28 +0000 UTC" firstStartedPulling="2025-05-27 03:55:28.487368708 +0000 UTC m=+6.835187784" lastFinishedPulling="2025-05-27 03:55:30.211755209 +0000 UTC m=+8.559574295" observedRunningTime="2025-05-27 03:55:30.807593962 +0000 UTC m=+9.155413038" watchObservedRunningTime="2025-05-27 03:55:30.814846981 +0000 UTC m=+9.162666057" May 27 03:55:30.880512 kubelet[2692]: E0527 03:55:30.880477 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:31.796119 kubelet[2692]: E0527 03:55:31.795369 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:31.796611 kubelet[2692]: E0527 03:55:31.796088 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:32.798210 kubelet[2692]: E0527 03:55:32.798147 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:35.723611 sudo[1815]: pam_unix(sudo:session): session closed for user root May 27 03:55:35.776913 sshd[1814]: Connection closed by 139.178.68.195 port 33970 May 27 03:55:35.777397 sshd-session[1812]: pam_unix(sshd:session): session closed for user core May 27 03:55:35.781840 systemd[1]: sshd@6-172.237.145.45:22-139.178.68.195:33970.service: Deactivated successfully. May 27 03:55:35.790164 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:55:35.790732 systemd[1]: session-7.scope: Consumed 4.495s CPU time, 230.4M memory peak. May 27 03:55:35.793130 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit. May 27 03:55:35.795607 systemd-logind[1546]: Removed session 7. May 27 03:55:36.364559 kubelet[2692]: E0527 03:55:36.363839 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:36.831593 update_engine[1549]: I20250527 03:55:36.830241 1549 update_attempter.cc:509] Updating boot flags... May 27 03:55:38.898586 systemd[1]: Created slice kubepods-besteffort-pod0a4b1ada_5282_456b_b794_2e9fd87cde1d.slice - libcontainer container kubepods-besteffort-pod0a4b1ada_5282_456b_b794_2e9fd87cde1d.slice. May 27 03:55:38.962049 kubelet[2692]: I0527 03:55:38.961942 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a4b1ada-5282-456b-b794-2e9fd87cde1d-tigera-ca-bundle\") pod \"calico-typha-589fff64bf-t8qpf\" (UID: \"0a4b1ada-5282-456b-b794-2e9fd87cde1d\") " pod="calico-system/calico-typha-589fff64bf-t8qpf" May 27 03:55:38.962049 kubelet[2692]: I0527 03:55:38.961977 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8d2d\" (UniqueName: \"kubernetes.io/projected/0a4b1ada-5282-456b-b794-2e9fd87cde1d-kube-api-access-h8d2d\") pod \"calico-typha-589fff64bf-t8qpf\" (UID: \"0a4b1ada-5282-456b-b794-2e9fd87cde1d\") " pod="calico-system/calico-typha-589fff64bf-t8qpf" May 27 03:55:38.962049 kubelet[2692]: I0527 03:55:38.961995 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0a4b1ada-5282-456b-b794-2e9fd87cde1d-typha-certs\") pod \"calico-typha-589fff64bf-t8qpf\" (UID: \"0a4b1ada-5282-456b-b794-2e9fd87cde1d\") " pod="calico-system/calico-typha-589fff64bf-t8qpf" May 27 03:55:39.204173 kubelet[2692]: E0527 03:55:39.203439 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:39.206623 containerd[1570]: time="2025-05-27T03:55:39.206377923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589fff64bf-t8qpf,Uid:0a4b1ada-5282-456b-b794-2e9fd87cde1d,Namespace:calico-system,Attempt:0,}" May 27 03:55:39.238302 containerd[1570]: time="2025-05-27T03:55:39.238254718Z" level=info msg="connecting to shim 9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be" address="unix:///run/containerd/s/f0a69a6e0be0efd8a73e26e02987f1bad777e11144d57c22f7ae610a7118833d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:39.265429 systemd[1]: Created slice 
kubepods-besteffort-podba7c10f8_3aee_43fc_a882_d2420297444a.slice - libcontainer container kubepods-besteffort-podba7c10f8_3aee_43fc_a882_d2420297444a.slice. May 27 03:55:39.289816 systemd[1]: Started cri-containerd-9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be.scope - libcontainer container 9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be. May 27 03:55:39.365786 kubelet[2692]: I0527 03:55:39.365642 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-policysync\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.366105 kubelet[2692]: I0527 03:55:39.366054 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-xtables-lock\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.366105 kubelet[2692]: I0527 03:55:39.366079 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-cni-bin-dir\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.366999 kubelet[2692]: I0527 03:55:39.366946 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-lib-modules\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367133 kubelet[2692]: I0527 03:55:39.366972 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba7c10f8-3aee-43fc-a882-d2420297444a-tigera-ca-bundle\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367133 kubelet[2692]: I0527 03:55:39.367104 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-flexvol-driver-host\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367295 kubelet[2692]: I0527 03:55:39.367247 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4p68\" (UniqueName: \"kubernetes.io/projected/ba7c10f8-3aee-43fc-a882-d2420297444a-kube-api-access-r4p68\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367425 kubelet[2692]: I0527 03:55:39.367268 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-var-lib-calico\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367425 kubelet[2692]: I0527 03:55:39.367367 2692 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-cni-net-dir\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367425 kubelet[2692]: I0527 03:55:39.367387 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba7c10f8-3aee-43fc-a882-d2420297444a-node-certs\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367637 kubelet[2692]: I0527 03:55:39.367588 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-cni-log-dir\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.367637 kubelet[2692]: I0527 03:55:39.367612 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba7c10f8-3aee-43fc-a882-d2420297444a-var-run-calico\") pod \"calico-node-szq4j\" (UID: \"ba7c10f8-3aee-43fc-a882-d2420297444a\") " pod="calico-system/calico-node-szq4j" May 27 03:55:39.439510 containerd[1570]: time="2025-05-27T03:55:39.439419255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589fff64bf-t8qpf,Uid:0a4b1ada-5282-456b-b794-2e9fd87cde1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be\"" May 27 03:55:39.440431 kubelet[2692]: E0527 03:55:39.440406 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:39.441629 containerd[1570]: time="2025-05-27T03:55:39.441483006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:55:39.489779 kubelet[2692]: E0527 03:55:39.487795 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.489779 kubelet[2692]: W0527 03:55:39.487814 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.489779 kubelet[2692]: E0527 03:55:39.487867 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.494747 kubelet[2692]: E0527 03:55:39.494723 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.494747 kubelet[2692]: W0527 03:55:39.494740 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.494747 kubelet[2692]: E0527 03:55:39.494750 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.509949 kubelet[2692]: E0527 03:55:39.509903 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc498" podUID="974f6c14-b459-4b2b-89e5-34bfda1490bf" May 27 03:55:39.541482 kubelet[2692]: E0527 03:55:39.541426 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.541482 kubelet[2692]: W0527 03:55:39.541465 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.541482 kubelet[2692]: E0527 03:55:39.541480 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.541751 kubelet[2692]: E0527 03:55:39.541734 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.541751 kubelet[2692]: W0527 03:55:39.541747 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.541797 kubelet[2692]: E0527 03:55:39.541776 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.542002 kubelet[2692]: E0527 03:55:39.541976 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.542002 kubelet[2692]: W0527 03:55:39.541989 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.542002 kubelet[2692]: E0527 03:55:39.541997 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.542298 kubelet[2692]: E0527 03:55:39.542280 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.542298 kubelet[2692]: W0527 03:55:39.542292 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.542298 kubelet[2692]: E0527 03:55:39.542300 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.548131 kubelet[2692]: E0527 03:55:39.547987 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.548131 kubelet[2692]: W0527 03:55:39.548008 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.548131 kubelet[2692]: E0527 03:55:39.548017 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.568521 kubelet[2692]: E0527 03:55:39.568396 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.568521 kubelet[2692]: W0527 03:55:39.568408 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.568521 kubelet[2692]: E0527 03:55:39.568418 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.568521 kubelet[2692]: I0527 03:55:39.568444 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/974f6c14-b459-4b2b-89e5-34bfda1490bf-registration-dir\") pod \"csi-node-driver-pc498\" (UID: \"974f6c14-b459-4b2b-89e5-34bfda1490bf\") " pod="calico-system/csi-node-driver-pc498" May 27 03:55:39.568759 kubelet[2692]: E0527 03:55:39.568738 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.568869 kubelet[2692]: W0527 03:55:39.568802 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.568869 kubelet[2692]: E0527 03:55:39.568815 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.568869 kubelet[2692]: I0527 03:55:39.568837 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/974f6c14-b459-4b2b-89e5-34bfda1490bf-socket-dir\") pod \"csi-node-driver-pc498\" (UID: \"974f6c14-b459-4b2b-89e5-34bfda1490bf\") " pod="calico-system/csi-node-driver-pc498" May 27 03:55:39.569069 kubelet[2692]: E0527 03:55:39.569044 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.569069 kubelet[2692]: W0527 03:55:39.569064 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.569126 kubelet[2692]: E0527 03:55:39.569076 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.569306 kubelet[2692]: E0527 03:55:39.569290 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.569306 kubelet[2692]: W0527 03:55:39.569302 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.569351 kubelet[2692]: E0527 03:55:39.569311 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.570089 kubelet[2692]: E0527 03:55:39.569821 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.570089 kubelet[2692]: W0527 03:55:39.569834 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.570089 kubelet[2692]: E0527 03:55:39.569842 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.570089 kubelet[2692]: I0527 03:55:39.569866 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974f6c14-b459-4b2b-89e5-34bfda1490bf-kubelet-dir\") pod \"csi-node-driver-pc498\" (UID: \"974f6c14-b459-4b2b-89e5-34bfda1490bf\") " pod="calico-system/csi-node-driver-pc498" May 27 03:55:39.570089 kubelet[2692]: E0527 03:55:39.570021 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.570089 kubelet[2692]: W0527 03:55:39.570029 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.570089 kubelet[2692]: E0527 03:55:39.570058 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.570281 containerd[1570]: time="2025-05-27T03:55:39.569852478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-szq4j,Uid:ba7c10f8-3aee-43fc-a882-d2420297444a,Namespace:calico-system,Attempt:0,}" May 27 03:55:39.570311 kubelet[2692]: I0527 03:55:39.570228 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5tv\" (UniqueName: \"kubernetes.io/projected/974f6c14-b459-4b2b-89e5-34bfda1490bf-kube-api-access-xm5tv\") pod \"csi-node-driver-pc498\" (UID: \"974f6c14-b459-4b2b-89e5-34bfda1490bf\") " pod="calico-system/csi-node-driver-pc498" May 27 03:55:39.570311 kubelet[2692]: E0527 03:55:39.570287 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.570311 kubelet[2692]: W0527 03:55:39.570293 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.570311 kubelet[2692]: E0527 03:55:39.570301 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.570548 kubelet[2692]: E0527 03:55:39.570520 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.570548 kubelet[2692]: W0527 03:55:39.570533 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.570548 kubelet[2692]: E0527 03:55:39.570541 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.570751 kubelet[2692]: E0527 03:55:39.570734 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.570751 kubelet[2692]: W0527 03:55:39.570746 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.570812 kubelet[2692]: E0527 03:55:39.570755 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.570812 kubelet[2692]: I0527 03:55:39.570782 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/974f6c14-b459-4b2b-89e5-34bfda1490bf-varrun\") pod \"csi-node-driver-pc498\" (UID: \"974f6c14-b459-4b2b-89e5-34bfda1490bf\") " pod="calico-system/csi-node-driver-pc498" May 27 03:55:39.571055 kubelet[2692]: E0527 03:55:39.571037 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.571055 kubelet[2692]: W0527 03:55:39.571051 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.571055 kubelet[2692]: E0527 03:55:39.571059 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.571387 kubelet[2692]: E0527 03:55:39.571311 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.571387 kubelet[2692]: W0527 03:55:39.571319 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.571387 kubelet[2692]: E0527 03:55:39.571326 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.571569 kubelet[2692]: E0527 03:55:39.571529 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.571569 kubelet[2692]: W0527 03:55:39.571541 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.571569 kubelet[2692]: E0527 03:55:39.571548 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.571802 kubelet[2692]: E0527 03:55:39.571785 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.571802 kubelet[2692]: W0527 03:55:39.571798 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.571802 kubelet[2692]: E0527 03:55:39.571806 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.572083 kubelet[2692]: E0527 03:55:39.572067 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.572083 kubelet[2692]: W0527 03:55:39.572079 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.572158 kubelet[2692]: E0527 03:55:39.572087 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.572467 kubelet[2692]: E0527 03:55:39.572437 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.572467 kubelet[2692]: W0527 03:55:39.572451 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.572467 kubelet[2692]: E0527 03:55:39.572460 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.589474 containerd[1570]: time="2025-05-27T03:55:39.585868466Z" level=info msg="connecting to shim 801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879" address="unix:///run/containerd/s/f002b9e5bf5b33286cc16798bd0e4fff4ea2a3be0c2e44c5a0f50a1c7a51e905" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:39.620325 systemd[1]: Started cri-containerd-801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879.scope - libcontainer container 801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879. May 27 03:55:39.650571 containerd[1570]: time="2025-05-27T03:55:39.650542947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-szq4j,Uid:ba7c10f8-3aee-43fc-a882-d2420297444a,Namespace:calico-system,Attempt:0,} returns sandbox id \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\"" May 27 03:55:39.672989 kubelet[2692]: E0527 03:55:39.672972 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.673084 kubelet[2692]: W0527 03:55:39.673070 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.673146 kubelet[2692]: E0527 03:55:39.673135 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:55:39.679633 kubelet[2692]: E0527 03:55:39.679603 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.679633 kubelet[2692]: W0527 03:55:39.679613 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.679633 kubelet[2692]: E0527 03:55:39.679622 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.679947 kubelet[2692]: E0527 03:55:39.679937 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.680014 kubelet[2692]: W0527 03:55:39.679992 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.680014 kubelet[2692]: E0527 03:55:39.680003 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.680376 kubelet[2692]: E0527 03:55:39.680344 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.680376 kubelet[2692]: W0527 03:55:39.680355 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.680376 kubelet[2692]: E0527 03:55:39.680364 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.680692 kubelet[2692]: E0527 03:55:39.680658 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.680692 kubelet[2692]: W0527 03:55:39.680669 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.680692 kubelet[2692]: E0527 03:55:39.680676 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:39.686440 kubelet[2692]: E0527 03:55:39.686372 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:39.686440 kubelet[2692]: W0527 03:55:39.686429 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:39.686440 kubelet[2692]: E0527 03:55:39.686440 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
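The triplet above recurs with fresh timestamps on every plugin-probe pass: kubelet scans the FlexVolume directory /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and for each vendor~driver subdirectory it executes the driver binary with the `init` argument, expecting a JSON status object on stdout. Here the nodeagent~uds directory exists but its `uds` binary has not been installed yet, so the call produces empty output and unmarshalling the empty string fails with "unexpected end of JSON input". A minimal Go sketch of that call convention (the status struct is simplified and its field names are assumptions, not kubelet's exact types):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the JSON object a FlexVolume
// driver is expected to print on stdout, e.g. {"status":"Success"}.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probeDriver mimics the call convention behind the log's driver-call.go
// errors: run the driver binary with "init" and decode its JSON reply.
func probeDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// A missing binary fails here, before any output is produced.
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	// Unmarshalling empty output is what yields
	// "unexpected end of JSON input".
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	const uds = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if st, err := probeDriver(uds); err != nil {
		fmt.Println("probe failed:", err)
	} else {
		fmt.Println("driver status:", st.Status)
	}
}
```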
May 27 03:55:40.106624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411975397.mount: Deactivated successfully. May 27 03:55:40.592859 containerd[1570]: time="2025-05-27T03:55:40.592820835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:40.593953 containerd[1570]: time="2025-05-27T03:55:40.593923726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:55:40.595411 containerd[1570]: time="2025-05-27T03:55:40.594920286Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:40.601899 containerd[1570]: time="2025-05-27T03:55:40.601609112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:40.602699 containerd[1570]: time="2025-05-27T03:55:40.602679272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.161042294s" May 27 03:55:40.602763 containerd[1570]: time="2025-05-27T03:55:40.602750482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:55:40.605002 containerd[1570]: time="2025-05-27T03:55:40.604942194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:55:40.623852 containerd[1570]: time="2025-05-27T03:55:40.623829410Z" level=info msg="CreateContainer within sandbox \"9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:55:40.635137 containerd[1570]: time="2025-05-27T03:55:40.631761448Z" level=info msg="Container 81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:40.638525 containerd[1570]: time="2025-05-27T03:55:40.638489094Z" level=info msg="CreateContainer within sandbox \"9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68\"" May 27 03:55:40.638935 containerd[1570]: time="2025-05-27T03:55:40.638912818Z" level=info msg="StartContainer for \"81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68\"" May 27 03:55:40.640423 containerd[1570]: time="2025-05-27T03:55:40.640387142Z" level=info msg="connecting to shim 81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68" address="unix:///run/containerd/s/f0a69a6e0be0efd8a73e26e02987f1bad777e11144d57c22f7ae610a7118833d" protocol=ttrpc version=3 May 27 03:55:40.668325 systemd[1]: Started cri-containerd-81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68.scope - libcontainer container 81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68.
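The ImageCreate/PullImage lines are containerd's CRI plugin fetching calico-typha into its Kubernetes image namespace before handing the container to a shim over ttrpc. The same pull can be reproduced with the containerd Go client; a rough sketch, assuming the v1 client library and the default management socket path (the log itself only shows the per-shim sockets under /run/containerd/s/):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; assumed, since the log only shows
	// per-shim sockets.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the kubelet requested.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```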
May 27 03:55:40.719812 containerd[1570]: time="2025-05-27T03:55:40.719735694Z" level=info msg="StartContainer for \"81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68\" returns successfully" May 27 03:55:40.818734 kubelet[2692]: E0527 03:55:40.818291 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:40.835763 kubelet[2692]: I0527 03:55:40.835695 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-589fff64bf-t8qpf" podStartSLOduration=1.6726926500000001 podStartE2EDuration="2.835665834s" podCreationTimestamp="2025-05-27 03:55:38 +0000 UTC" firstStartedPulling="2025-05-27 03:55:39.441171513 +0000 UTC m=+17.788990599" lastFinishedPulling="2025-05-27 03:55:40.604144707 +0000 UTC m=+18.951963783" observedRunningTime="2025-05-27 03:55:40.834997737 +0000 UTC m=+19.182816813" watchObservedRunningTime="2025-05-27 03:55:40.835665834 +0000 UTC m=+19.183484910" May 27 03:55:40.857215 kubelet[2692]: E0527 03:55:40.855526 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:40.857215 kubelet[2692]: W0527 03:55:40.855644 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:40.857215 kubelet[2692]: E0527 03:55:40.855663 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:55:40.896454 kubelet[2692]: E0527 03:55:40.896432 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:55:40.896454 kubelet[2692]: W0527 03:55:40.896447 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:55:40.896454 kubelet[2692]: E0527 03:55:40.896455 2692 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
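The recurring dns.go "Nameserver limits exceeded" warning above means the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet truncates the list it passes through to pods; the applied line here keeps only 172.232.0.9, 172.232.0.19 and 172.232.0.20. A quick diagnostic sketch for counting the configured entries (stdlib only, not kubelet's implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	// The glibc resolver, and kubelet's limit check, only honor the
	// first three entries; anything beyond that triggers the warning.
	fmt.Printf("%d nameservers: %v\n", len(servers), servers)
	if len(servers) > 3 {
		fmt.Println("limit exceeded: entries beyond the first three are dropped")
	}
}
```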
May 27 03:55:41.318758 containerd[1570]: time="2025-05-27T03:55:41.318716548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:41.319396 containerd[1570]: time="2025-05-27T03:55:41.319371875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:55:41.320075 containerd[1570]: time="2025-05-27T03:55:41.320023761Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:41.321285 containerd[1570]: time="2025-05-27T03:55:41.321248432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:41.321889 containerd[1570]: time="2025-05-27T03:55:41.321746587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 716.661721ms" May 27 03:55:41.321889 containerd[1570]: time="2025-05-27T03:55:41.321778887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:55:41.325633 containerd[1570]: time="2025-05-27T03:55:41.325595762Z" level=info msg="CreateContainer within sandbox \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:55:41.331758 containerd[1570]: time="2025-05-27T03:55:41.331734948Z" level=info msg="Container 681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:41.339522 containerd[1570]: time="2025-05-27T03:55:41.339440279Z" level=info msg="CreateContainer within sandbox \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\"" May 27 03:55:41.340382 containerd[1570]: time="2025-05-27T03:55:41.340128506Z" level=info msg="StartContainer for \"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\"" May 27 03:55:41.341782 containerd[1570]: time="2025-05-27T03:55:41.341743330Z" level=info msg="connecting to shim 681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b" address="unix:///run/containerd/s/f002b9e5bf5b33286cc16798bd0e4fff4ea2a3be0c2e44c5a0f50a1c7a51e905" protocol=ttrpc version=3 May 27 03:55:41.364333 systemd[1]: Started cri-containerd-681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b.scope - libcontainer container 681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b.
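The flexvol-driver container created here runs calico's pod2daemon-flexvol image, whose job is to install the very `uds` binary the probe errors above keep looking for; once it has run, the nodeagent~uds probes stop failing. A trivial check for whether the driver has landed (path taken from the log; a diagnostic sketch, not part of calico):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(driver)
	if err != nil {
		// This is the state behind the repeated probe errors above.
		fmt.Println("driver not installed yet:", err)
		return
	}
	// The binary must also be executable for kubelet's "init" call.
	fmt.Printf("found %s (mode %v, %d bytes)\n", driver, info.Mode(), info.Size())
}
```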
May 27 03:55:41.410376 containerd[1570]: time="2025-05-27T03:55:41.410296220Z" level=info msg="StartContainer for \"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\" returns successfully" May 27 03:55:41.426888 systemd[1]: cri-containerd-681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b.scope: Deactivated successfully. May 27 03:55:41.429878 containerd[1570]: time="2025-05-27T03:55:41.429853290Z" level=info msg="received exit event container_id:\"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\" id:\"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\" pid:3389 exited_at:{seconds:1748318141 nanos:429526227}" May 27 03:55:41.430119 containerd[1570]: time="2025-05-27T03:55:41.429911401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\" id:\"681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b\" pid:3389 exited_at:{seconds:1748318141 nanos:429526227}" May 27 03:55:41.454887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b-rootfs.mount: Deactivated successfully. May 27 03:55:41.750704 kubelet[2692]: E0527 03:55:41.750060 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc498" podUID="974f6c14-b459-4b2b-89e5-34bfda1490bf" May 27 03:55:41.822434 kubelet[2692]: I0527 03:55:41.821113 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:55:41.822434 kubelet[2692]: E0527 03:55:41.821377 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:41.823892 containerd[1570]: time="2025-05-27T03:55:41.823864940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:55:43.419174 containerd[1570]: time="2025-05-27T03:55:43.419132273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:43.419995 containerd[1570]: time="2025-05-27T03:55:43.419962609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:55:43.421222 containerd[1570]: time="2025-05-27T03:55:43.421137719Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:43.423109 containerd[1570]: time="2025-05-27T03:55:43.423086554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:43.423789 containerd[1570]: time="2025-05-27T03:55:43.423479868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 1.598559988s" May 27 03:55:43.423789 containerd[1570]: 
time="2025-05-27T03:55:43.423511138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:55:43.427213 containerd[1570]: time="2025-05-27T03:55:43.427153257Z" level=info msg="CreateContainer within sandbox \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:55:43.435685 containerd[1570]: time="2025-05-27T03:55:43.435331033Z" level=info msg="Container ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:43.446915 containerd[1570]: time="2025-05-27T03:55:43.446876265Z" level=info msg="CreateContainer within sandbox \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\"" May 27 03:55:43.447467 containerd[1570]: time="2025-05-27T03:55:43.447439460Z" level=info msg="StartContainer for \"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\"" May 27 03:55:43.449723 containerd[1570]: time="2025-05-27T03:55:43.449688708Z" level=info msg="connecting to shim ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706" address="unix:///run/containerd/s/f002b9e5bf5b33286cc16798bd0e4fff4ea2a3be0c2e44c5a0f50a1c7a51e905" protocol=ttrpc version=3 May 27 03:55:43.471325 systemd[1]: Started cri-containerd-ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706.scope - libcontainer container ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706. May 27 03:55:43.509893 containerd[1570]: time="2025-05-27T03:55:43.509858149Z" level=info msg="StartContainer for \"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\" returns successfully" May 27 03:55:43.750834 kubelet[2692]: E0527 03:55:43.750797 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pc498" podUID="974f6c14-b459-4b2b-89e5-34bfda1490bf" May 27 03:55:43.991748 containerd[1570]: time="2025-05-27T03:55:43.991607621Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:55:43.994497 systemd[1]: cri-containerd-ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706.scope: Deactivated successfully. May 27 03:55:43.994952 systemd[1]: cri-containerd-ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706.scope: Consumed 528ms CPU time, 195.4M memory peak, 170.9M written to disk. 
May 27 03:55:43.997623 containerd[1570]: time="2025-05-27T03:55:43.997554830Z" level=info msg="received exit event container_id:\"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\" id:\"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\" pid:3450 exited_at:{seconds:1748318143 nanos:996915454}" May 27 03:55:43.998034 containerd[1570]: time="2025-05-27T03:55:43.997987093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\" id:\"ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706\" pid:3450 exited_at:{seconds:1748318143 nanos:996915454}" May 27 03:55:44.019796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706-rootfs.mount: Deactivated successfully. May 27 03:55:44.069908 kubelet[2692]: I0527 03:55:44.069869 2692 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:55:44.111670 systemd[1]: Created slice kubepods-burstable-pod929ede96_a62c_46af_99bf_c380684a0a25.slice - libcontainer container kubepods-burstable-pod929ede96_a62c_46af_99bf_c380684a0a25.slice. May 27 03:55:44.125040 systemd[1]: Created slice kubepods-besteffort-podd147b7e2_07eb_4a3a_87d4_09d9a681d7e9.slice - libcontainer container kubepods-besteffort-podd147b7e2_07eb_4a3a_87d4_09d9a681d7e9.slice. May 27 03:55:44.142679 systemd[1]: Created slice kubepods-besteffort-podb12cf906_4be7_43b2_ab06_ceebfe9f484a.slice - libcontainer container kubepods-besteffort-podb12cf906_4be7_43b2_ab06_ceebfe9f484a.slice. May 27 03:55:44.151455 systemd[1]: Created slice kubepods-besteffort-pod305d83db_a282_47d8_aae7_29f765f39c27.slice - libcontainer container kubepods-besteffort-pod305d83db_a282_47d8_aae7_29f765f39c27.slice. May 27 03:55:44.166860 systemd[1]: Created slice kubepods-burstable-pod787e6dba_96d7_437a_a019_e1a3390ef888.slice - libcontainer container kubepods-burstable-pod787e6dba_96d7_437a_a019_e1a3390ef888.slice. May 27 03:55:44.179230 systemd[1]: Created slice kubepods-besteffort-pod4087922b_0418_477d_a845_ce56592933b9.slice - libcontainer container kubepods-besteffort-pod4087922b_0418_477d_a845_ce56592933b9.slice. May 27 03:55:44.186201 systemd[1]: Created slice kubepods-besteffort-pod5900b6a9_212e_4f83_ab2e_42e0e9d86137.slice - libcontainer container kubepods-besteffort-pod5900b6a9_212e_4f83_ab2e_42e0e9d86137.slice. 
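The slice names record kubelet's systemd cgroup driver nesting each pod under its QoS class, with the pod UID embedded after escaping dashes to underscores. A small sketch of that naming rule as it appears in these lines (my reconstruction of the convention, not kubelet's actual helper):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName derives the systemd slice unit name for a pod: the QoS class
// segment plus the pod UID with "-" escaped to "_", as systemd unit names
// reserve "-" for hierarchy.
func sliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID taken from the whisker pod in the log; prints
	// kubepods-besteffort-podd147b7e2_07eb_4a3a_87d4_09d9a681d7e9.slice,
	// matching the unit systemd just created.
	fmt.Println(sliceName("besteffort", "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9"))
}
```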
May 27 03:55:44.206987 kubelet[2692]: I0527 03:55:44.206173 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnlw\" (UniqueName: \"kubernetes.io/projected/5900b6a9-212e-4f83-ab2e-42e0e9d86137-kube-api-access-tvnlw\") pod \"calico-kube-controllers-68b896d46-g2wtg\" (UID: \"5900b6a9-212e-4f83-ab2e-42e0e9d86137\") " pod="calico-system/calico-kube-controllers-68b896d46-g2wtg" May 27 03:55:44.207088 kubelet[2692]: I0527 03:55:44.207064 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-backend-key-pair\") pod \"whisker-75545d5bdf-bkh5l\" (UID: \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\") " pod="calico-system/whisker-75545d5bdf-bkh5l" May 27 03:55:44.207119 kubelet[2692]: I0527 03:55:44.207094 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-ca-bundle\") pod \"whisker-75545d5bdf-bkh5l\" (UID: \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\") " pod="calico-system/whisker-75545d5bdf-bkh5l" May 27 03:55:44.207172 kubelet[2692]: I0527 03:55:44.207111 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/305d83db-a282-47d8-aae7-29f765f39c27-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-n8xhd\" (UID: \"305d83db-a282-47d8-aae7-29f765f39c27\") " pod="calico-system/goldmane-78d55f7ddc-n8xhd" May 27 03:55:44.207435 kubelet[2692]: I0527 03:55:44.207408 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxgn\" (UniqueName: \"kubernetes.io/projected/305d83db-a282-47d8-aae7-29f765f39c27-kube-api-access-8hxgn\") pod \"goldmane-78d55f7ddc-n8xhd\" (UID: \"305d83db-a282-47d8-aae7-29f765f39c27\") " pod="calico-system/goldmane-78d55f7ddc-n8xhd" May 27 03:55:44.207768 kubelet[2692]: I0527 03:55:44.207731 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b12cf906-4be7-43b2-ab06-ceebfe9f484a-calico-apiserver-certs\") pod \"calico-apiserver-f4f448c9d-6qqwn\" (UID: \"b12cf906-4be7-43b2-ab06-ceebfe9f484a\") " pod="calico-apiserver/calico-apiserver-f4f448c9d-6qqwn" May 27 03:55:44.207800 kubelet[2692]: I0527 03:55:44.207790 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/305d83db-a282-47d8-aae7-29f765f39c27-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-n8xhd\" (UID: \"305d83db-a282-47d8-aae7-29f765f39c27\") " pod="calico-system/goldmane-78d55f7ddc-n8xhd" May 27 03:55:44.207855 kubelet[2692]: I0527 03:55:44.207808 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6l5p\" (UniqueName: \"kubernetes.io/projected/4087922b-0418-477d-a845-ce56592933b9-kube-api-access-s6l5p\") pod \"calico-apiserver-f4f448c9d-kp96l\" (UID: \"4087922b-0418-477d-a845-ce56592933b9\") " pod="calico-apiserver/calico-apiserver-f4f448c9d-kp96l" May 27 03:55:44.208063 kubelet[2692]: I0527 03:55:44.207826 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/787e6dba-96d7-437a-a019-e1a3390ef888-config-volume\") pod \"coredns-674b8bbfcf-xxzsn\" (UID: \"787e6dba-96d7-437a-a019-e1a3390ef888\") " pod="kube-system/coredns-674b8bbfcf-xxzsn" May 27 03:55:44.208507 kubelet[2692]: I0527 03:55:44.208137 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5900b6a9-212e-4f83-ab2e-42e0e9d86137-tigera-ca-bundle\") pod \"calico-kube-controllers-68b896d46-g2wtg\" (UID: \"5900b6a9-212e-4f83-ab2e-42e0e9d86137\") " pod="calico-system/calico-kube-controllers-68b896d46-g2wtg" May 27 03:55:44.208507 kubelet[2692]: I0527 03:55:44.208372 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rqd\" (UniqueName: \"kubernetes.io/projected/929ede96-a62c-46af-99bf-c380684a0a25-kube-api-access-r2rqd\") pod \"coredns-674b8bbfcf-xpzxg\" (UID: \"929ede96-a62c-46af-99bf-c380684a0a25\") " pod="kube-system/coredns-674b8bbfcf-xpzxg" May 27 03:55:44.208507 kubelet[2692]: I0527 03:55:44.208391 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88596\" (UniqueName: \"kubernetes.io/projected/b12cf906-4be7-43b2-ab06-ceebfe9f484a-kube-api-access-88596\") pod \"calico-apiserver-f4f448c9d-6qqwn\" (UID: \"b12cf906-4be7-43b2-ab06-ceebfe9f484a\") " pod="calico-apiserver/calico-apiserver-f4f448c9d-6qqwn" May 27 03:55:44.208507 kubelet[2692]: I0527 03:55:44.208408 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4087922b-0418-477d-a845-ce56592933b9-calico-apiserver-certs\") pod \"calico-apiserver-f4f448c9d-kp96l\" (UID: \"4087922b-0418-477d-a845-ce56592933b9\") " pod="calico-apiserver/calico-apiserver-f4f448c9d-kp96l" May 27 03:55:44.208507 kubelet[2692]: I0527 03:55:44.208420 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcprf\" (UniqueName: \"kubernetes.io/projected/787e6dba-96d7-437a-a019-e1a3390ef888-kube-api-access-lcprf\") pod \"coredns-674b8bbfcf-xxzsn\" (UID: \"787e6dba-96d7-437a-a019-e1a3390ef888\") " pod="kube-system/coredns-674b8bbfcf-xxzsn" May 27 03:55:44.208674 kubelet[2692]: I0527 03:55:44.208641 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/305d83db-a282-47d8-aae7-29f765f39c27-config\") pod \"goldmane-78d55f7ddc-n8xhd\" (UID: \"305d83db-a282-47d8-aae7-29f765f39c27\") " pod="calico-system/goldmane-78d55f7ddc-n8xhd" May 27 03:55:44.209019 kubelet[2692]: I0527 03:55:44.208990 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9fm\" (UniqueName: \"kubernetes.io/projected/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-kube-api-access-sq9fm\") pod \"whisker-75545d5bdf-bkh5l\" (UID: \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\") " pod="calico-system/whisker-75545d5bdf-bkh5l" May 27 03:55:44.209284 kubelet[2692]: I0527 03:55:44.209246 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/929ede96-a62c-46af-99bf-c380684a0a25-config-volume\") pod \"coredns-674b8bbfcf-xpzxg\" (UID: \"929ede96-a62c-46af-99bf-c380684a0a25\") " pod="kube-system/coredns-674b8bbfcf-xpzxg" May 27 
03:55:44.421489 kubelet[2692]: E0527 03:55:44.421378 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:44.421922 containerd[1570]: time="2025-05-27T03:55:44.421867636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpzxg,Uid:929ede96-a62c-46af-99bf-c380684a0a25,Namespace:kube-system,Attempt:0,}" May 27 03:55:44.439222 containerd[1570]: time="2025-05-27T03:55:44.437949477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75545d5bdf-bkh5l,Uid:d147b7e2-07eb-4a3a-87d4-09d9a681d7e9,Namespace:calico-system,Attempt:0,}" May 27 03:55:44.458497 containerd[1570]: time="2025-05-27T03:55:44.458474820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-6qqwn,Uid:b12cf906-4be7-43b2-ab06-ceebfe9f484a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:55:44.465059 containerd[1570]: time="2025-05-27T03:55:44.463566168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-n8xhd,Uid:305d83db-a282-47d8-aae7-29f765f39c27,Namespace:calico-system,Attempt:0,}" May 27 03:55:44.473305 kubelet[2692]: E0527 03:55:44.472081 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:44.474134 containerd[1570]: time="2025-05-27T03:55:44.474105367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xxzsn,Uid:787e6dba-96d7-437a-a019-e1a3390ef888,Namespace:kube-system,Attempt:0,}" May 27 03:55:44.487862 containerd[1570]: time="2025-05-27T03:55:44.487813850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-kp96l,Uid:4087922b-0418-477d-a845-ce56592933b9,Namespace:calico-apiserver,Attempt:0,}" May 27 03:55:44.494145 containerd[1570]: time="2025-05-27T03:55:44.493768974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b896d46-g2wtg,Uid:5900b6a9-212e-4f83-ab2e-42e0e9d86137,Namespace:calico-system,Attempt:0,}" May 27 03:55:44.593529 containerd[1570]: time="2025-05-27T03:55:44.593488730Z" level=error msg="Failed to destroy network for sandbox \"418642a7e13a62873246918855b85fb2a340a793134302e8f91a8e01eb23da56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.597163 containerd[1570]: time="2025-05-27T03:55:44.597130038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75545d5bdf-bkh5l,Uid:d147b7e2-07eb-4a3a-87d4-09d9a681d7e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"418642a7e13a62873246918855b85fb2a340a793134302e8f91a8e01eb23da56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.598072 kubelet[2692]: E0527 03:55:44.597950 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418642a7e13a62873246918855b85fb2a340a793134302e8f91a8e01eb23da56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.598072 kubelet[2692]: E0527 03:55:44.598034 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418642a7e13a62873246918855b85fb2a340a793134302e8f91a8e01eb23da56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75545d5bdf-bkh5l" May 27 03:55:44.598072 kubelet[2692]: E0527 03:55:44.598054 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418642a7e13a62873246918855b85fb2a340a793134302e8f91a8e01eb23da56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75545d5bdf-bkh5l" May 27 03:55:44.598170 kubelet[2692]: E0527 03:55:44.598117 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75545d5bdf-bkh5l_calico-system(d147b7e2-07eb-4a3a-87d4-09d9a681d7e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75545d5bdf-bkh5l_calico-system(d147b7e2-07eb-4a3a-87d4-09d9a681d7e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"418642a7e13a62873246918855b85fb2a340a793134302e8f91a8e01eb23da56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75545d5bdf-bkh5l" podUID="d147b7e2-07eb-4a3a-87d4-09d9a681d7e9" May 27 03:55:44.612886 containerd[1570]: time="2025-05-27T03:55:44.612803435Z" level=error msg="Failed to destroy network for sandbox \"8103261611c9bcb1a94395e6310aa41d22db1f3778c94cfcaf392b3733233287\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.616744 containerd[1570]: time="2025-05-27T03:55:44.616716945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpzxg,Uid:929ede96-a62c-46af-99bf-c380684a0a25,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8103261611c9bcb1a94395e6310aa41d22db1f3778c94cfcaf392b3733233287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.620144 kubelet[2692]: E0527 03:55:44.620061 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8103261611c9bcb1a94395e6310aa41d22db1f3778c94cfcaf392b3733233287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.620144 kubelet[2692]: E0527 03:55:44.620133 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8103261611c9bcb1a94395e6310aa41d22db1f3778c94cfcaf392b3733233287\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xpzxg" May 27 03:55:44.620267 kubelet[2692]: E0527 03:55:44.620152 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8103261611c9bcb1a94395e6310aa41d22db1f3778c94cfcaf392b3733233287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xpzxg" May 27 03:55:44.620519 kubelet[2692]: E0527 03:55:44.620253 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xpzxg_kube-system(929ede96-a62c-46af-99bf-c380684a0a25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xpzxg_kube-system(929ede96-a62c-46af-99bf-c380684a0a25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8103261611c9bcb1a94395e6310aa41d22db1f3778c94cfcaf392b3733233287\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xpzxg" podUID="929ede96-a62c-46af-99bf-c380684a0a25" May 27 03:55:44.637499 containerd[1570]: time="2025-05-27T03:55:44.637358769Z" level=error msg="Failed to destroy network for sandbox \"7d496dc46f5ae8b60fd6c5c9c1b1845d23d9e7820602948ff13d665204e6e60c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.640246 containerd[1570]: time="2025-05-27T03:55:44.640164850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-n8xhd,Uid:305d83db-a282-47d8-aae7-29f765f39c27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d496dc46f5ae8b60fd6c5c9c1b1845d23d9e7820602948ff13d665204e6e60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.640486 kubelet[2692]: E0527 03:55:44.640438 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d496dc46f5ae8b60fd6c5c9c1b1845d23d9e7820602948ff13d665204e6e60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.640486 kubelet[2692]: E0527 03:55:44.640477 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d496dc46f5ae8b60fd6c5c9c1b1845d23d9e7820602948ff13d665204e6e60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-n8xhd" May 27 03:55:44.640552 kubelet[2692]: E0527 03:55:44.640494 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"7d496dc46f5ae8b60fd6c5c9c1b1845d23d9e7820602948ff13d665204e6e60c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-n8xhd" May 27 03:55:44.640552 kubelet[2692]: E0527 03:55:44.640541 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d496dc46f5ae8b60fd6c5c9c1b1845d23d9e7820602948ff13d665204e6e60c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:55:44.651663 containerd[1570]: time="2025-05-27T03:55:44.651610505Z" level=error msg="Failed to destroy network for sandbox \"9742d45f805cd112941c3d9e58883028a8cb6de19b0a3e09d30afd39ada3b1de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.653833 containerd[1570]: time="2025-05-27T03:55:44.653662081Z" level=error msg="Failed to destroy network for sandbox \"4d84568a7db785ed42edf563f989bcfb9e6544b49daa22021babe72f7bf9e791\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.655230 containerd[1570]: time="2025-05-27T03:55:44.655174773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-6qqwn,Uid:b12cf906-4be7-43b2-ab06-ceebfe9f484a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9742d45f805cd112941c3d9e58883028a8cb6de19b0a3e09d30afd39ada3b1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.656949 kubelet[2692]: E0527 03:55:44.655491 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9742d45f805cd112941c3d9e58883028a8cb6de19b0a3e09d30afd39ada3b1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.656949 kubelet[2692]: E0527 03:55:44.655526 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9742d45f805cd112941c3d9e58883028a8cb6de19b0a3e09d30afd39ada3b1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f448c9d-6qqwn" May 27 03:55:44.656949 kubelet[2692]: E0527 03:55:44.655543 2692 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9742d45f805cd112941c3d9e58883028a8cb6de19b0a3e09d30afd39ada3b1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f448c9d-6qqwn" May 27 03:55:44.657083 kubelet[2692]: E0527 03:55:44.655582 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4f448c9d-6qqwn_calico-apiserver(b12cf906-4be7-43b2-ab06-ceebfe9f484a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4f448c9d-6qqwn_calico-apiserver(b12cf906-4be7-43b2-ab06-ceebfe9f484a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9742d45f805cd112941c3d9e58883028a8cb6de19b0a3e09d30afd39ada3b1de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4f448c9d-6qqwn" podUID="b12cf906-4be7-43b2-ab06-ceebfe9f484a" May 27 03:55:44.659378 containerd[1570]: time="2025-05-27T03:55:44.659342434Z" level=error msg="Failed to destroy network for sandbox \"9beb1f83cf4e0c3a96c719da2bb8a5a2a468e7900e6676c61c9b73d63d32dce6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.659488 containerd[1570]: time="2025-05-27T03:55:44.659463084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xxzsn,Uid:787e6dba-96d7-437a-a019-e1a3390ef888,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d84568a7db785ed42edf563f989bcfb9e6544b49daa22021babe72f7bf9e791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.660972 containerd[1570]: time="2025-05-27T03:55:44.660326081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b896d46-g2wtg,Uid:5900b6a9-212e-4f83-ab2e-42e0e9d86137,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beb1f83cf4e0c3a96c719da2bb8a5a2a468e7900e6676c61c9b73d63d32dce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.661026 kubelet[2692]: E0527 03:55:44.660254 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d84568a7db785ed42edf563f989bcfb9e6544b49daa22021babe72f7bf9e791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.661026 kubelet[2692]: E0527 03:55:44.660280 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d84568a7db785ed42edf563f989bcfb9e6544b49daa22021babe72f7bf9e791\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xxzsn" May 27 03:55:44.661026 kubelet[2692]: E0527 03:55:44.660294 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d84568a7db785ed42edf563f989bcfb9e6544b49daa22021babe72f7bf9e791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xxzsn" May 27 03:55:44.661100 kubelet[2692]: E0527 03:55:44.660329 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xxzsn_kube-system(787e6dba-96d7-437a-a019-e1a3390ef888)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xxzsn_kube-system(787e6dba-96d7-437a-a019-e1a3390ef888)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d84568a7db785ed42edf563f989bcfb9e6544b49daa22021babe72f7bf9e791\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xxzsn" podUID="787e6dba-96d7-437a-a019-e1a3390ef888" May 27 03:55:44.661829 kubelet[2692]: E0527 03:55:44.661312 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beb1f83cf4e0c3a96c719da2bb8a5a2a468e7900e6676c61c9b73d63d32dce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.661829 kubelet[2692]: E0527 03:55:44.661395 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beb1f83cf4e0c3a96c719da2bb8a5a2a468e7900e6676c61c9b73d63d32dce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b896d46-g2wtg" May 27 03:55:44.661829 kubelet[2692]: E0527 03:55:44.661438 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beb1f83cf4e0c3a96c719da2bb8a5a2a468e7900e6676c61c9b73d63d32dce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b896d46-g2wtg" May 27 03:55:44.661992 kubelet[2692]: E0527 03:55:44.661472 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b896d46-g2wtg_calico-system(5900b6a9-212e-4f83-ab2e-42e0e9d86137)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b896d46-g2wtg_calico-system(5900b6a9-212e-4f83-ab2e-42e0e9d86137)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9beb1f83cf4e0c3a96c719da2bb8a5a2a468e7900e6676c61c9b73d63d32dce6\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b896d46-g2wtg" podUID="5900b6a9-212e-4f83-ab2e-42e0e9d86137" May 27 03:55:44.673271 containerd[1570]: time="2025-05-27T03:55:44.672327511Z" level=error msg="Failed to destroy network for sandbox \"adada9e2ed805b60c73420abd94a843150bb284b18862db29fc386c6c24c8784\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.673821 containerd[1570]: time="2025-05-27T03:55:44.673609690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-kp96l,Uid:4087922b-0418-477d-a845-ce56592933b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adada9e2ed805b60c73420abd94a843150bb284b18862db29fc386c6c24c8784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.673935 kubelet[2692]: E0527 03:55:44.673820 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adada9e2ed805b60c73420abd94a843150bb284b18862db29fc386c6c24c8784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:44.673935 kubelet[2692]: E0527 03:55:44.673861 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adada9e2ed805b60c73420abd94a843150bb284b18862db29fc386c6c24c8784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f448c9d-kp96l" May 27 03:55:44.673935 kubelet[2692]: E0527 03:55:44.673881 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adada9e2ed805b60c73420abd94a843150bb284b18862db29fc386c6c24c8784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4f448c9d-kp96l" May 27 03:55:44.674013 kubelet[2692]: E0527 03:55:44.673939 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4f448c9d-kp96l_calico-apiserver(4087922b-0418-477d-a845-ce56592933b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4f448c9d-kp96l_calico-apiserver(4087922b-0418-477d-a845-ce56592933b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adada9e2ed805b60c73420abd94a843150bb284b18862db29fc386c6c24c8784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4f448c9d-kp96l" podUID="4087922b-0418-477d-a845-ce56592933b9" May 27 03:55:44.834557 
containerd[1570]: time="2025-05-27T03:55:44.834535774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:55:45.436788 systemd[1]: run-netns-cni\x2da86a7efa\x2d1ad0\x2de7a0\x2dfa53\x2d308b03d461c8.mount: Deactivated successfully. May 27 03:55:45.437330 systemd[1]: run-netns-cni\x2de6c97038\x2ddb09\x2d3af6\x2dbbda\x2dc2f0f097b5fe.mount: Deactivated successfully. May 27 03:55:45.437472 systemd[1]: run-netns-cni\x2d958e91cf\x2d0e73\x2d28ae\x2d3efd\x2db33191823f22.mount: Deactivated successfully. May 27 03:55:45.437628 systemd[1]: run-netns-cni\x2d818f01b3\x2d7658\x2daaa3\x2ddef4\x2d2ebcae2b4108.mount: Deactivated successfully. May 27 03:55:45.437776 systemd[1]: run-netns-cni\x2df2ea5c35\x2d2ba9\x2de1ca\x2db1ed\x2dfca5d7dc220c.mount: Deactivated successfully. May 27 03:55:45.758707 systemd[1]: Created slice kubepods-besteffort-pod974f6c14_b459_4b2b_89e5_34bfda1490bf.slice - libcontainer container kubepods-besteffort-pod974f6c14_b459_4b2b_89e5_34bfda1490bf.slice. May 27 03:55:45.761159 containerd[1570]: time="2025-05-27T03:55:45.760909047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc498,Uid:974f6c14-b459-4b2b-89e5-34bfda1490bf,Namespace:calico-system,Attempt:0,}" May 27 03:55:45.845567 containerd[1570]: time="2025-05-27T03:55:45.845512898Z" level=error msg="Failed to destroy network for sandbox \"578fda2ff9d8af7fb6284f9be82afd73097ec370f3b63a07c1b2f7c8daef4dc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:45.851940 containerd[1570]: time="2025-05-27T03:55:45.848154556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc498,Uid:974f6c14-b459-4b2b-89e5-34bfda1490bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"578fda2ff9d8af7fb6284f9be82afd73097ec370f3b63a07c1b2f7c8daef4dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:45.852020 kubelet[2692]: E0527 03:55:45.849771 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578fda2ff9d8af7fb6284f9be82afd73097ec370f3b63a07c1b2f7c8daef4dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:55:45.852020 kubelet[2692]: E0527 03:55:45.849819 2692 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578fda2ff9d8af7fb6284f9be82afd73097ec370f3b63a07c1b2f7c8daef4dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pc498" May 27 03:55:45.852020 kubelet[2692]: E0527 03:55:45.849846 2692 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578fda2ff9d8af7fb6284f9be82afd73097ec370f3b63a07c1b2f7c8daef4dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pc498" May 27 03:55:45.849030 systemd[1]: run-netns-cni\x2dfa1762f1\x2dd968\x2d2846\x2d7a78\x2dc112aa6b57b8.mount: Deactivated successfully. May 27 03:55:45.852447 kubelet[2692]: E0527 03:55:45.849887 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pc498_calico-system(974f6c14-b459-4b2b-89e5-34bfda1490bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pc498_calico-system(974f6c14-b459-4b2b-89e5-34bfda1490bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"578fda2ff9d8af7fb6284f9be82afd73097ec370f3b63a07c1b2f7c8daef4dc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pc498" podUID="974f6c14-b459-4b2b-89e5-34bfda1490bf" May 27 03:55:48.186879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755905023.mount: Deactivated successfully. May 27 03:55:48.217336 containerd[1570]: time="2025-05-27T03:55:48.217280074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:48.218341 containerd[1570]: time="2025-05-27T03:55:48.218249870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:55:48.218991 containerd[1570]: time="2025-05-27T03:55:48.218956283Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:48.221223 containerd[1570]: time="2025-05-27T03:55:48.220799073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:48.221465 containerd[1570]: time="2025-05-27T03:55:48.221422877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 3.386766692s" May 27 03:55:48.221683 containerd[1570]: time="2025-05-27T03:55:48.221466698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:55:48.242608 containerd[1570]: time="2025-05-27T03:55:48.242582197Z" level=info msg="CreateContainer within sandbox \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:55:48.253573 containerd[1570]: time="2025-05-27T03:55:48.252367093Z" level=info msg="Container 9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:48.262849 containerd[1570]: time="2025-05-27T03:55:48.262811422Z" level=info msg="CreateContainer within sandbox \"801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\"" May 27 03:55:48.263363 containerd[1570]: time="2025-05-27T03:55:48.263323625Z" level=info msg="StartContainer for \"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\"" May 27 03:55:48.264872 containerd[1570]: time="2025-05-27T03:55:48.264835893Z" level=info msg="connecting to shim 9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613" address="unix:///run/containerd/s/f002b9e5bf5b33286cc16798bd0e4fff4ea2a3be0c2e44c5a0f50a1c7a51e905" protocol=ttrpc version=3 May 27 03:55:48.327300 systemd[1]: Started cri-containerd-9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613.scope - libcontainer container 9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613. May 27 03:55:48.379767 containerd[1570]: time="2025-05-27T03:55:48.379681444Z" level=info msg="StartContainer for \"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" returns successfully" May 27 03:55:48.470768 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:55:48.470843 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 03:55:48.634416 kubelet[2692]: I0527 03:55:48.634374 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-ca-bundle\") pod \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\" (UID: \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\") " May 27 03:55:48.634416 kubelet[2692]: I0527 03:55:48.634417 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9fm\" (UniqueName: \"kubernetes.io/projected/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-kube-api-access-sq9fm\") pod \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\" (UID: \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\") " May 27 03:55:48.636117 kubelet[2692]: I0527 03:55:48.634447 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-backend-key-pair\") pod \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\" (UID: \"d147b7e2-07eb-4a3a-87d4-09d9a681d7e9\") " May 27 03:55:48.636117 kubelet[2692]: I0527 03:55:48.635448 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9" (UID: "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:55:48.649203 kubelet[2692]: I0527 03:55:48.649148 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9" (UID: "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:55:48.649504 kubelet[2692]: I0527 03:55:48.649461 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-kube-api-access-sq9fm" (OuterVolumeSpecName: "kube-api-access-sq9fm") pod "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9" (UID: "d147b7e2-07eb-4a3a-87d4-09d9a681d7e9"). InnerVolumeSpecName "kube-api-access-sq9fm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:55:48.736233 kubelet[2692]: I0527 03:55:48.735354 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-ca-bundle\") on node \"172-237-145-45\" DevicePath \"\"" May 27 03:55:48.736342 kubelet[2692]: I0527 03:55:48.736261 2692 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sq9fm\" (UniqueName: \"kubernetes.io/projected/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-kube-api-access-sq9fm\") on node \"172-237-145-45\" DevicePath \"\"" May 27 03:55:48.736342 kubelet[2692]: I0527 03:55:48.736281 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9-whisker-backend-key-pair\") on node \"172-237-145-45\" DevicePath \"\"" May 27 03:55:48.855717 systemd[1]: Removed slice kubepods-besteffort-podd147b7e2_07eb_4a3a_87d4_09d9a681d7e9.slice - libcontainer container kubepods-besteffort-podd147b7e2_07eb_4a3a_87d4_09d9a681d7e9.slice. May 27 03:55:48.871010 kubelet[2692]: I0527 03:55:48.869623 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-szq4j" podStartSLOduration=1.301348312 podStartE2EDuration="9.86961099s" podCreationTimestamp="2025-05-27 03:55:39 +0000 UTC" firstStartedPulling="2025-05-27 03:55:39.654256646 +0000 UTC m=+18.002075722" lastFinishedPulling="2025-05-27 03:55:48.222519324 +0000 UTC m=+26.570338400" observedRunningTime="2025-05-27 03:55:48.866393112 +0000 UTC m=+27.214212188" watchObservedRunningTime="2025-05-27 03:55:48.86961099 +0000 UTC m=+27.217430066" May 27 03:55:48.916516 systemd[1]: Created slice kubepods-besteffort-poddc5ba6bb_b92b_4490_90f8_193e957a2beb.slice - libcontainer container kubepods-besteffort-poddc5ba6bb_b92b_4490_90f8_193e957a2beb.slice. 
May 27 03:55:48.937724 kubelet[2692]: I0527 03:55:48.937697 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc5ba6bb-b92b-4490-90f8-193e957a2beb-whisker-backend-key-pair\") pod \"whisker-6cdf9fdb77-bk2d7\" (UID: \"dc5ba6bb-b92b-4490-90f8-193e957a2beb\") " pod="calico-system/whisker-6cdf9fdb77-bk2d7" May 27 03:55:48.938848 kubelet[2692]: I0527 03:55:48.938431 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhc8\" (UniqueName: \"kubernetes.io/projected/dc5ba6bb-b92b-4490-90f8-193e957a2beb-kube-api-access-dmhc8\") pod \"whisker-6cdf9fdb77-bk2d7\" (UID: \"dc5ba6bb-b92b-4490-90f8-193e957a2beb\") " pod="calico-system/whisker-6cdf9fdb77-bk2d7" May 27 03:55:48.938848 kubelet[2692]: I0527 03:55:48.938486 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc5ba6bb-b92b-4490-90f8-193e957a2beb-whisker-ca-bundle\") pod \"whisker-6cdf9fdb77-bk2d7\" (UID: \"dc5ba6bb-b92b-4490-90f8-193e957a2beb\") " pod="calico-system/whisker-6cdf9fdb77-bk2d7" May 27 03:55:49.189457 systemd[1]: var-lib-kubelet-pods-d147b7e2\x2d07eb\x2d4a3a\x2d87d4\x2d09d9a681d7e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsq9fm.mount: Deactivated successfully. May 27 03:55:49.189568 systemd[1]: var-lib-kubelet-pods-d147b7e2\x2d07eb\x2d4a3a\x2d87d4\x2d09d9a681d7e9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:55:49.221896 containerd[1570]: time="2025-05-27T03:55:49.221841881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdf9fdb77-bk2d7,Uid:dc5ba6bb-b92b-4490-90f8-193e957a2beb,Namespace:calico-system,Attempt:0,}" May 27 03:55:49.352491 systemd-networkd[1472]: cali12372396983: Link UP May 27 03:55:49.353169 systemd-networkd[1472]: cali12372396983: Gained carrier May 27 03:55:49.368790 containerd[1570]: 2025-05-27 03:55:49.245 [INFO][3778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:55:49.368790 containerd[1570]: 2025-05-27 03:55:49.282 [INFO][3778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0 whisker-6cdf9fdb77- calico-system dc5ba6bb-b92b-4490-90f8-193e957a2beb 877 0 2025-05-27 03:55:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cdf9fdb77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-145-45 whisker-6cdf9fdb77-bk2d7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12372396983 [] [] }} ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-" May 27 03:55:49.368790 containerd[1570]: 2025-05-27 03:55:49.282 [INFO][3778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.368790 containerd[1570]: 2025-05-27 03:55:49.306 [INFO][3790] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" HandleID="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Workload="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.306 [INFO][3790] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" HandleID="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Workload="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9020), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-45", "pod":"whisker-6cdf9fdb77-bk2d7", "timestamp":"2025-05-27 03:55:49.306151327 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.306 [INFO][3790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.306 [INFO][3790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.306 [INFO][3790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.313 [INFO][3790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" host="172-237-145-45" May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.320 [INFO][3790] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.325 [INFO][3790] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.326 [INFO][3790] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.328 [INFO][3790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:49.368980 containerd[1570]: 2025-05-27 03:55:49.328 [INFO][3790] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" host="172-237-145-45" May 27 03:55:49.369172 containerd[1570]: 2025-05-27 03:55:49.329 [INFO][3790] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c May 27 03:55:49.369172 containerd[1570]: 2025-05-27 03:55:49.332 [INFO][3790] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" host="172-237-145-45" May 27 03:55:49.369172 containerd[1570]: 2025-05-27 03:55:49.338 [INFO][3790] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.193/26] block=192.168.75.192/26 handle="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" host="172-237-145-45" May 27 03:55:49.369172 containerd[1570]: 2025-05-27 03:55:49.338 [INFO][3790] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.75.193/26] handle="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" host="172-237-145-45" May 27 03:55:49.369172 containerd[1570]: 2025-05-27 03:55:49.338 [INFO][3790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:55:49.369172 containerd[1570]: 2025-05-27 03:55:49.338 [INFO][3790] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.193/26] IPv6=[] ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" HandleID="k8s-pod-network.03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Workload="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.369300 containerd[1570]: 2025-05-27 03:55:49.342 [INFO][3778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0", GenerateName:"whisker-6cdf9fdb77-", Namespace:"calico-system", SelfLink:"", UID:"dc5ba6bb-b92b-4490-90f8-193e957a2beb", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cdf9fdb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"whisker-6cdf9fdb77-bk2d7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12372396983", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:49.369300 containerd[1570]: 2025-05-27 03:55:49.343 [INFO][3778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.193/32] ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.369370 containerd[1570]: 2025-05-27 03:55:49.343 [INFO][3778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12372396983 ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.369370 containerd[1570]: 2025-05-27 03:55:49.354 [INFO][3778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.369412 containerd[1570]: 2025-05-27 
03:55:49.354 [INFO][3778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0", GenerateName:"whisker-6cdf9fdb77-", Namespace:"calico-system", SelfLink:"", UID:"dc5ba6bb-b92b-4490-90f8-193e957a2beb", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cdf9fdb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c", Pod:"whisker-6cdf9fdb77-bk2d7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12372396983", MAC:"0e:ad:49:53:a2:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:49.369458 containerd[1570]: 2025-05-27 03:55:49.364 [INFO][3778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" Namespace="calico-system" Pod="whisker-6cdf9fdb77-bk2d7" WorkloadEndpoint="172--237--145--45-k8s-whisker--6cdf9fdb77--bk2d7-eth0" May 27 03:55:49.413203 containerd[1570]: time="2025-05-27T03:55:49.413142381Z" level=info msg="connecting to shim 03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c" address="unix:///run/containerd/s/28729663ea044d5adc01eeb906a57490ab2c8a29c05c93ad7ffe243f66bbfefc" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:49.446327 systemd[1]: Started cri-containerd-03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c.scope - libcontainer container 03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c. 
May 27 03:55:49.493346 containerd[1570]: time="2025-05-27T03:55:49.493317565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdf9fdb77-bk2d7,Uid:dc5ba6bb-b92b-4490-90f8-193e957a2beb,Namespace:calico-system,Attempt:0,} returns sandbox id \"03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c\"" May 27 03:55:49.496333 containerd[1570]: time="2025-05-27T03:55:49.495282885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:55:49.594053 containerd[1570]: time="2025-05-27T03:55:49.594003477Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:55:49.595176 containerd[1570]: time="2025-05-27T03:55:49.595123442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:55:49.595424 containerd[1570]: time="2025-05-27T03:55:49.595125282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:55:49.596903 kubelet[2692]: E0527 03:55:49.596865 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:55:49.597033 kubelet[2692]: E0527 03:55:49.596913 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:55:49.597082 kubelet[2692]: E0527 03:55:49.597039 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a533b5daea634b52a7b2eac27807f9ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:55:49.599093 containerd[1570]: time="2025-05-27T03:55:49.599066723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:55:49.701716 containerd[1570]: time="2025-05-27T03:55:49.700542259Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:55:49.701833 containerd[1570]: time="2025-05-27T03:55:49.701696665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:55:49.701902 containerd[1570]: time="2025-05-27T03:55:49.701862316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:55:49.702471 kubelet[2692]: E0527 03:55:49.702061 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:55:49.702471 kubelet[2692]: E0527 03:55:49.702447 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:55:49.704408 kubelet[2692]: E0527 03:55:49.703421 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:55:49.704656 kubelet[2692]: E0527 03:55:49.704609 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": 
failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:55:49.751926 kubelet[2692]: I0527 03:55:49.751892 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d147b7e2-07eb-4a3a-87d4-09d9a681d7e9" path="/var/lib/kubelet/pods/d147b7e2-07eb-4a3a-87d4-09d9a681d7e9/volumes" May 27 03:55:49.854274 kubelet[2692]: I0527 03:55:49.854227 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:55:49.856906 kubelet[2692]: E0527 03:55:49.856662 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:55:50.397340 systemd-networkd[1472]: cali12372396983: Gained IPv6LL May 27 03:55:50.857543 kubelet[2692]: E0527 03:55:50.857132 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:55:55.253469 kubelet[2692]: I0527 03:55:55.253260 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:55:55.254386 kubelet[2692]: E0527 03:55:55.254325 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:55.750467 containerd[1570]: time="2025-05-27T03:55:55.750430889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-6qqwn,Uid:b12cf906-4be7-43b2-ab06-ceebfe9f484a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:55:55.847230 systemd-networkd[1472]: cali34708b99b95: Link UP May 27 03:55:55.847459 systemd-networkd[1472]: cali34708b99b95: Gained carrier May 27 03:55:55.858344 containerd[1570]: 2025-05-27 03:55:55.774 [INFO][4073] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:55:55.858344 containerd[1570]: 2025-05-27 03:55:55.785 [INFO][4073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0 calico-apiserver-f4f448c9d- calico-apiserver b12cf906-4be7-43b2-ab06-ceebfe9f484a 808 0 2025-05-27 03:55:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4f448c9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-145-45 calico-apiserver-f4f448c9d-6qqwn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali34708b99b95 [] [] }} ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-" May 27 03:55:55.858344 containerd[1570]: 2025-05-27 03:55:55.785 [INFO][4073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.858344 containerd[1570]: 2025-05-27 03:55:55.811 [INFO][4085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" HandleID="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Workload="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.811 [INFO][4085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" HandleID="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Workload="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-145-45", "pod":"calico-apiserver-f4f448c9d-6qqwn", "timestamp":"2025-05-27 03:55:55.811475298 +0000 UTC"}, 
Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.811 [INFO][4085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.811 [INFO][4085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.811 [INFO][4085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.817 [INFO][4085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" host="172-237-145-45" May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.823 [INFO][4085] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.827 [INFO][4085] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.829 [INFO][4085] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:55.858529 containerd[1570]: 2025-05-27 03:55:55.830 [INFO][4085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.831 [INFO][4085] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" host="172-237-145-45" May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.832 [INFO][4085] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1 May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.836 [INFO][4085] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" host="172-237-145-45" May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.841 [INFO][4085] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.194/26] block=192.168.75.192/26 handle="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" host="172-237-145-45" May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.841 [INFO][4085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.194/26] handle="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" host="172-237-145-45" May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.841 [INFO][4085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:55:55.858712 containerd[1570]: 2025-05-27 03:55:55.841 [INFO][4085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.194/26] IPv6=[] ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" HandleID="k8s-pod-network.83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Workload="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.858838 containerd[1570]: 2025-05-27 03:55:55.843 [INFO][4073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0", GenerateName:"calico-apiserver-f4f448c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b12cf906-4be7-43b2-ab06-ceebfe9f484a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f448c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"calico-apiserver-f4f448c9d-6qqwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34708b99b95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:55.858887 containerd[1570]: 2025-05-27 03:55:55.843 [INFO][4073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.194/32] ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.858887 containerd[1570]: 2025-05-27 03:55:55.843 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34708b99b95 ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.858887 containerd[1570]: 2025-05-27 03:55:55.846 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.858945 containerd[1570]: 2025-05-27 03:55:55.847 [INFO][4073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0", GenerateName:"calico-apiserver-f4f448c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b12cf906-4be7-43b2-ab06-ceebfe9f484a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f448c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1", Pod:"calico-apiserver-f4f448c9d-6qqwn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34708b99b95", MAC:"62:fe:25:41:24:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:55.858990 containerd[1570]: 2025-05-27 03:55:55.854 [INFO][4073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-6qqwn" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--6qqwn-eth0" May 27 03:55:55.862247 kubelet[2692]: E0527 03:55:55.862168 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:55.881621 containerd[1570]: time="2025-05-27T03:55:55.881586469Z" level=info msg="connecting to shim 83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1" address="unix:///run/containerd/s/d3d91474ecc90c86b97c4c8824e780db83182c58218801f79cd6676ab77b5aa9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:55.911388 systemd[1]: Started cri-containerd-83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1.scope - libcontainer container 83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1. 
May 27 03:55:55.958748 containerd[1570]: time="2025-05-27T03:55:55.958721754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-6qqwn,Uid:b12cf906-4be7-43b2-ab06-ceebfe9f484a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1\"" May 27 03:55:55.960743 containerd[1570]: time="2025-05-27T03:55:55.960702121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:55:56.559829 systemd-networkd[1472]: vxlan.calico: Link UP May 27 03:55:56.559839 systemd-networkd[1472]: vxlan.calico: Gained carrier May 27 03:55:57.693869 systemd-networkd[1472]: cali34708b99b95: Gained IPv6LL May 27 03:55:57.749805 kubelet[2692]: E0527 03:55:57.749766 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:57.754372 containerd[1570]: time="2025-05-27T03:55:57.754294631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xxzsn,Uid:787e6dba-96d7-437a-a019-e1a3390ef888,Namespace:kube-system,Attempt:0,}" May 27 03:55:57.897319 systemd-networkd[1472]: cali95dff6645f9: Link UP May 27 03:55:57.898355 systemd-networkd[1472]: cali95dff6645f9: Gained carrier May 27 03:55:57.918576 containerd[1570]: 2025-05-27 03:55:57.818 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0 coredns-674b8bbfcf- kube-system 787e6dba-96d7-437a-a019-e1a3390ef888 812 0 2025-05-27 03:55:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-145-45 coredns-674b8bbfcf-xxzsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95dff6645f9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-" May 27 03:55:57.918576 containerd[1570]: 2025-05-27 03:55:57.818 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.918576 containerd[1570]: 2025-05-27 03:55:57.855 [INFO][4245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" HandleID="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Workload="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.855 [INFO][4245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" HandleID="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Workload="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235630), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-145-45", "pod":"coredns-674b8bbfcf-xxzsn", 
"timestamp":"2025-05-27 03:55:57.855092207 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.855 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.855 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.855 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.862 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" host="172-237-145-45" May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.867 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.872 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.874 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.878 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:57.920233 containerd[1570]: 2025-05-27 03:55:57.878 [INFO][4245] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" host="172-237-145-45" May 27 03:55:57.920503 containerd[1570]: 2025-05-27 03:55:57.879 [INFO][4245] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0 May 27 03:55:57.920503 containerd[1570]: 2025-05-27 03:55:57.883 [INFO][4245] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" host="172-237-145-45" May 27 03:55:57.920503 containerd[1570]: 2025-05-27 03:55:57.889 [INFO][4245] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.195/26] block=192.168.75.192/26 handle="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" host="172-237-145-45" May 27 03:55:57.920503 containerd[1570]: 2025-05-27 03:55:57.889 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.195/26] handle="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" host="172-237-145-45" May 27 03:55:57.920503 containerd[1570]: 2025-05-27 03:55:57.889 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:55:57.920503 containerd[1570]: 2025-05-27 03:55:57.889 [INFO][4245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.195/26] IPv6=[] ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" HandleID="k8s-pod-network.32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Workload="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.920625 containerd[1570]: 2025-05-27 03:55:57.892 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"787e6dba-96d7-437a-a019-e1a3390ef888", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"coredns-674b8bbfcf-xxzsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95dff6645f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:57.920625 containerd[1570]: 2025-05-27 03:55:57.893 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.195/32] ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.920625 containerd[1570]: 2025-05-27 03:55:57.893 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95dff6645f9 ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.920625 containerd[1570]: 2025-05-27 03:55:57.899 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" 
WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.920625 containerd[1570]: 2025-05-27 03:55:57.900 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"787e6dba-96d7-437a-a019-e1a3390ef888", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0", Pod:"coredns-674b8bbfcf-xxzsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95dff6645f9", MAC:"a2:c0:bc:02:d7:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:57.920625 containerd[1570]: 2025-05-27 03:55:57.912 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xxzsn" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xxzsn-eth0" May 27 03:55:57.949252 containerd[1570]: time="2025-05-27T03:55:57.948221823Z" level=info msg="connecting to shim 32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0" address="unix:///run/containerd/s/be9b02806762948bd11e04791dce1a3a63cd2e373f3046497a4ed8aeef56b17d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:57.983366 systemd[1]: Started cri-containerd-32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0.scope - libcontainer container 32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0. 
May 27 03:55:58.041338 containerd[1570]: time="2025-05-27T03:55:58.041177748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xxzsn,Uid:787e6dba-96d7-437a-a019-e1a3390ef888,Namespace:kube-system,Attempt:0,} returns sandbox id \"32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0\"" May 27 03:55:58.042403 kubelet[2692]: E0527 03:55:58.042333 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:58.047714 containerd[1570]: time="2025-05-27T03:55:58.047682156Z" level=info msg="CreateContainer within sandbox \"32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:55:58.057798 containerd[1570]: time="2025-05-27T03:55:58.057328092Z" level=info msg="Container 3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:58.064136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533368553.mount: Deactivated successfully. May 27 03:55:58.067630 containerd[1570]: time="2025-05-27T03:55:58.067583930Z" level=info msg="CreateContainer within sandbox \"32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800\"" May 27 03:55:58.068698 containerd[1570]: time="2025-05-27T03:55:58.068308712Z" level=info msg="StartContainer for \"3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800\"" May 27 03:55:58.069183 containerd[1570]: time="2025-05-27T03:55:58.069153644Z" level=info msg="connecting to shim 3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800" address="unix:///run/containerd/s/be9b02806762948bd11e04791dce1a3a63cd2e373f3046497a4ed8aeef56b17d" protocol=ttrpc version=3 May 27 03:55:58.095364 systemd[1]: Started cri-containerd-3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800.scope - libcontainer container 3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800. 
May 27 03:55:58.131136 containerd[1570]: time="2025-05-27T03:55:58.131063484Z" level=info msg="StartContainer for \"3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800\" returns successfully" May 27 03:55:58.269469 systemd-networkd[1472]: vxlan.calico: Gained IPv6LL May 27 03:55:58.751066 containerd[1570]: time="2025-05-27T03:55:58.750818378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-n8xhd,Uid:305d83db-a282-47d8-aae7-29f765f39c27,Namespace:calico-system,Attempt:0,}" May 27 03:55:58.876183 kubelet[2692]: E0527 03:55:58.876154 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:58.887725 systemd-networkd[1472]: caliab16a309a84: Link UP May 27 03:55:58.888064 systemd-networkd[1472]: caliab16a309a84: Gained carrier May 27 03:55:58.910426 kubelet[2692]: I0527 03:55:58.909654 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xxzsn" podStartSLOduration=30.909640643 podStartE2EDuration="30.909640643s" podCreationTimestamp="2025-05-27 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:58.898292561 +0000 UTC m=+37.246111657" watchObservedRunningTime="2025-05-27 03:55:58.909640643 +0000 UTC m=+37.257459729" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.789 [INFO][4344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0 goldmane-78d55f7ddc- calico-system 305d83db-a282-47d8-aae7-29f765f39c27 818 0 2025-05-27 03:55:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-237-145-45 goldmane-78d55f7ddc-n8xhd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliab16a309a84 [] [] }} ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.789 [INFO][4344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.826 [INFO][4356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" HandleID="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Workload="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.826 [INFO][4356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" HandleID="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Workload="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d9630), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-45", "pod":"goldmane-78d55f7ddc-n8xhd", "timestamp":"2025-05-27 03:55:58.826610955 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.826 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.827 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.827 [INFO][4356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.837 [INFO][4356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.843 [INFO][4356] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.848 [INFO][4356] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.850 [INFO][4356] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.852 [INFO][4356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.852 [INFO][4356] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.853 [INFO][4356] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.859 [INFO][4356] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.865 [INFO][4356] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.196/26] block=192.168.75.192/26 handle="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.865 [INFO][4356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.196/26] handle="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" host="172-237-145-45" May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.865 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:55:58.916034 containerd[1570]: 2025-05-27 03:55:58.865 [INFO][4356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.196/26] IPv6=[] ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" HandleID="k8s-pod-network.71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Workload="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.917640 containerd[1570]: 2025-05-27 03:55:58.868 [INFO][4344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"305d83db-a282-47d8-aae7-29f765f39c27", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"goldmane-78d55f7ddc-n8xhd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab16a309a84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:58.917640 containerd[1570]: 2025-05-27 03:55:58.869 [INFO][4344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.196/32] ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.917640 containerd[1570]: 2025-05-27 03:55:58.869 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab16a309a84 ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.917640 containerd[1570]: 2025-05-27 03:55:58.886 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.917640 containerd[1570]: 2025-05-27 03:55:58.887 [INFO][4344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" 
WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"305d83db-a282-47d8-aae7-29f765f39c27", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e", Pod:"goldmane-78d55f7ddc-n8xhd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab16a309a84", MAC:"be:1d:e0:32:6a:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:55:58.917640 containerd[1570]: 2025-05-27 03:55:58.906 [INFO][4344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-n8xhd" WorkloadEndpoint="172--237--145--45-k8s-goldmane--78d55f7ddc--n8xhd-eth0" May 27 03:55:58.958478 containerd[1570]: time="2025-05-27T03:55:58.958391206Z" level=info msg="connecting to shim 71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e" address="unix:///run/containerd/s/00d3f761d117767d22e6c1be4ef8cb79f14465663140595d89fda3f88d59e469" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:59.017649 systemd[1]: Started cri-containerd-71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e.scope - libcontainer container 71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e. 
May 27 03:55:59.113705 containerd[1570]: time="2025-05-27T03:55:59.113667048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-n8xhd,Uid:305d83db-a282-47d8-aae7-29f765f39c27,Namespace:calico-system,Attempt:0,} returns sandbox id \"71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e\"" May 27 03:55:59.549330 systemd-networkd[1472]: cali95dff6645f9: Gained IPv6LL May 27 03:55:59.692107 containerd[1570]: time="2025-05-27T03:55:59.692071752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:59.692687 containerd[1570]: time="2025-05-27T03:55:59.692662084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:55:59.693603 containerd[1570]: time="2025-05-27T03:55:59.693096845Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:59.695209 containerd[1570]: time="2025-05-27T03:55:59.695160900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:59.695772 containerd[1570]: time="2025-05-27T03:55:59.695730292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.734909402s" May 27 03:55:59.695814 containerd[1570]: time="2025-05-27T03:55:59.695773732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:55:59.704156 containerd[1570]: time="2025-05-27T03:55:59.704133262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:55:59.712848 containerd[1570]: time="2025-05-27T03:55:59.712783094Z" level=info msg="CreateContainer within sandbox \"83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:55:59.728217 containerd[1570]: time="2025-05-27T03:55:59.726411809Z" level=info msg="Container ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:59.735924 containerd[1570]: time="2025-05-27T03:55:59.735878322Z" level=info msg="CreateContainer within sandbox \"83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420\"" May 27 03:55:59.736564 containerd[1570]: time="2025-05-27T03:55:59.736534044Z" level=info msg="StartContainer for \"ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420\"" May 27 03:55:59.737738 containerd[1570]: time="2025-05-27T03:55:59.737697968Z" level=info msg="connecting to shim ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420" address="unix:///run/containerd/s/d3d91474ecc90c86b97c4c8824e780db83182c58218801f79cd6676ab77b5aa9" protocol=ttrpc version=3 May 27 
03:55:59.751054 kubelet[2692]: E0527 03:55:59.751016 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:59.753687 containerd[1570]: time="2025-05-27T03:55:59.753611698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-kp96l,Uid:4087922b-0418-477d-a845-ce56592933b9,Namespace:calico-apiserver,Attempt:0,}" May 27 03:55:59.757039 containerd[1570]: time="2025-05-27T03:55:59.756991106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc498,Uid:974f6c14-b459-4b2b-89e5-34bfda1490bf,Namespace:calico-system,Attempt:0,}" May 27 03:55:59.757173 containerd[1570]: time="2025-05-27T03:55:59.757151317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpzxg,Uid:929ede96-a62c-46af-99bf-c380684a0a25,Namespace:kube-system,Attempt:0,}" May 27 03:55:59.757346 containerd[1570]: time="2025-05-27T03:55:59.757258937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b896d46-g2wtg,Uid:5900b6a9-212e-4f83-ab2e-42e0e9d86137,Namespace:calico-system,Attempt:0,}" May 27 03:55:59.764377 systemd[1]: Started cri-containerd-ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420.scope - libcontainer container ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420. May 27 03:55:59.812729 containerd[1570]: time="2025-05-27T03:55:59.812557247Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:55:59.815820 containerd[1570]: time="2025-05-27T03:55:59.815719526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:55:59.816613 containerd[1570]: time="2025-05-27T03:55:59.816116886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:55:59.817212 kubelet[2692]: E0527 03:55:59.816803 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:55:59.817212 kubelet[2692]: E0527 03:55:59.816841 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:55:59.817212 kubelet[2692]: E0527 03:55:59.817149 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hxgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:55:59.819513 kubelet[2692]: E0527 03:55:59.819368 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:55:59.892315 kubelet[2692]: E0527 03:55:59.891345 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:55:59.892315 kubelet[2692]: E0527 03:55:59.892145 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:55:59.900485 containerd[1570]: time="2025-05-27T03:55:59.899701208Z" level=info msg="StartContainer for \"ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420\" returns successfully" May 27 03:56:00.052397 systemd-networkd[1472]: cali1ee76416581: Link UP May 27 03:56:00.055031 systemd-networkd[1472]: cali1ee76416581: Gained carrier May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.875 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0 calico-apiserver-f4f448c9d- calico-apiserver 4087922b-0418-477d-a845-ce56592933b9 819 0 2025-05-27 03:55:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4f448c9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-145-45 calico-apiserver-f4f448c9d-kp96l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ee76416581 [] [] }} ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.876 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.981 [INFO][4511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" HandleID="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Workload="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.982 [INFO][4511] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" HandleID="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Workload="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-145-45", "pod":"calico-apiserver-f4f448c9d-kp96l", "timestamp":"2025-05-27 03:55:59.977856875 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.982 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.984 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:55:59.984 [INFO][4511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.000 [INFO][4511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.010 [INFO][4511] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.018 [INFO][4511] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.022 [INFO][4511] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.024 [INFO][4511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.025 [INFO][4511] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.026 [INFO][4511] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838 May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.029 [INFO][4511] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.041 [INFO][4511] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.197/26] block=192.168.75.192/26 handle="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.041 [INFO][4511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.197/26] handle="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" host="172-237-145-45" May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.041 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
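Aside: the IPAM sequence above ends with containerd claiming 192.168.75.197/26 out of the host's affine block 192.168.75.192/26. A minimal sketch (Python stdlib only; addresses copied from the log entries above) reproducing the containment check Calico is logging:

```python
# Worked check of the IPAM math recorded above: the host's affine block is
# 192.168.75.192/26 and the address claimed for the pod is 192.168.75.197.
import ipaddress

block = ipaddress.ip_network("192.168.75.192/26")
claimed = ipaddress.ip_address("192.168.75.197")

assert claimed in block
print(f"{claimed} is inside {block} "
      f"({block.network_address}..{block.broadcast_address}, "
      f"{block.num_addresses} addresses)")
```

The /26 gives this node a 64-address block (192.168.75.192 through 192.168.75.255), which is why the successive pod assignments later in this log land on .197, .198, .199, and .200.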
May 27 03:56:00.073819 containerd[1570]: 2025-05-27 03:56:00.041 [INFO][4511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.197/26] IPv6=[] ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" HandleID="k8s-pod-network.c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Workload="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.074524 containerd[1570]: 2025-05-27 03:56:00.046 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0", GenerateName:"calico-apiserver-f4f448c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4087922b-0418-477d-a845-ce56592933b9", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f448c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"calico-apiserver-f4f448c9d-kp96l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ee76416581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.074524 containerd[1570]: 2025-05-27 03:56:00.046 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.197/32] ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.074524 containerd[1570]: 2025-05-27 03:56:00.046 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ee76416581 ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.074524 containerd[1570]: 2025-05-27 03:56:00.056 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.074524 containerd[1570]: 2025-05-27 03:56:00.057 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0", GenerateName:"calico-apiserver-f4f448c9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4087922b-0418-477d-a845-ce56592933b9", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4f448c9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838", Pod:"calico-apiserver-f4f448c9d-kp96l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ee76416581", MAC:"c2:47:3a:8e:9f:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.074524 containerd[1570]: 2025-05-27 03:56:00.067 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" Namespace="calico-apiserver" Pod="calico-apiserver-f4f448c9d-kp96l" WorkloadEndpoint="172--237--145--45-k8s-calico--apiserver--f4f448c9d--kp96l-eth0" May 27 03:56:00.117546 containerd[1570]: time="2025-05-27T03:56:00.117433537Z" level=info msg="connecting to shim c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838" address="unix:///run/containerd/s/f7472fd918d0e9992cc3d33567d2b7e4f68f5810c046cf9fdb1cb52f91cb4e81" namespace=k8s.io protocol=ttrpc version=3 May 27 03:56:00.155456 systemd[1]: Started cri-containerd-c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838.scope - libcontainer container c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838. 
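Aside: every ErrImagePull in this log bottoms out in the same anonymous token request against ghcr.io. A hedged out-of-band reproduction (stdlib only; the URL is copied verbatim from the kubelet entries) for probing the registry while debugging — as long as anonymous pulls stay denied it prints the same 403 the kubelet recorded:

```python
# Reproduce the anonymous token fetch that the ErrImagePull entries above
# show failing. URL taken verbatim from the log; no credentials attached.
import urllib.request
import urllib.error

TOKEN_URL = ("https://ghcr.io/token"
             "?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull"
             "&service=ghcr.io")

try:
    with urllib.request.urlopen(TOKEN_URL) as resp:
        print("status:", resp.status)   # 200 would mean anonymous pulls work again
except urllib.error.HTTPError as e:
    print("status:", e.code)            # the log recorded 403 Forbidden here
```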
May 27 03:56:00.169694 systemd-networkd[1472]: cali9f58cc585b5: Link UP May 27 03:56:00.170560 systemd-networkd[1472]: cali9f58cc585b5: Gained carrier May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:55:59.923 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0 coredns-674b8bbfcf- kube-system 929ede96-a62c-46af-99bf-c380684a0a25 804 0 2025-05-27 03:55:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-145-45 coredns-674b8bbfcf-xpzxg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9f58cc585b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:55:59.924 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:55:59.993 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" HandleID="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Workload="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:55:59.993 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" HandleID="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Workload="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397a80), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-145-45", "pod":"coredns-674b8bbfcf-xpzxg", "timestamp":"2025-05-27 03:55:59.993173025 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:55:59.993 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.041 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.041 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.102 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.111 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.119 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.121 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.126 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.126 [INFO][4523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.129 [INFO][4523] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9 May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.135 [INFO][4523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.142 [INFO][4523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.198/26] block=192.168.75.192/26 handle="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.142 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.198/26] handle="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" host="172-237-145-45" May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.143 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
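Aside: the recurring "Nameserver limits exceeded" kubelet entries mean the node's resolv.conf lists more nameservers than kubelet will propagate, so it applied only the first three (172.232.0.9, 172.232.0.19, 172.232.0.20). A sketch of that trimming, assuming upstream kubelet's limit of three nameservers; the fourth entry below is invented purely to trigger the omission:

```python
# Sketch of the trimming the kubelet describes: keep only the first
# MAX_NAMESERVERS "nameserver" lines and drop the rest.
MAX_NAMESERVERS = 3  # assumption: upstream kubelet's MaxDNSNameservers

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.startswith("nameserver")]
    return servers[:MAX_NAMESERVERS]

example = """nameserver 172.232.0.9
nameserver 172.232.0.19
nameserver 172.232.0.20
nameserver 1.1.1.1
"""  # the 1.1.1.1 entry is hypothetical, not from the log

print(applied_nameservers(example))
# ['172.232.0.9', '172.232.0.19', '172.232.0.20'] -- the applied line in the log
```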
May 27 03:56:00.189365 containerd[1570]: 2025-05-27 03:56:00.143 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.198/26] IPv6=[] ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" HandleID="k8s-pod-network.0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Workload="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.189812 containerd[1570]: 2025-05-27 03:56:00.147 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"929ede96-a62c-46af-99bf-c380684a0a25", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"coredns-674b8bbfcf-xpzxg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f58cc585b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.189812 containerd[1570]: 2025-05-27 03:56:00.147 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.198/32] ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.189812 containerd[1570]: 2025-05-27 03:56:00.147 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f58cc585b5 ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.189812 containerd[1570]: 2025-05-27 03:56:00.171 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" 
WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.189812 containerd[1570]: 2025-05-27 03:56:00.172 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"929ede96-a62c-46af-99bf-c380684a0a25", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9", Pod:"coredns-674b8bbfcf-xpzxg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f58cc585b5", MAC:"62:a7:09:65:3e:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.189812 containerd[1570]: 2025-05-27 03:56:00.184 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" Namespace="kube-system" Pod="coredns-674b8bbfcf-xpzxg" WorkloadEndpoint="172--237--145--45-k8s-coredns--674b8bbfcf--xpzxg-eth0" May 27 03:56:00.219373 containerd[1570]: time="2025-05-27T03:56:00.218892045Z" level=info msg="connecting to shim 0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9" address="unix:///run/containerd/s/12b8286579520543e6315b6d2f88c5f522e70a5de1c15149aced0f0622d36783" namespace=k8s.io protocol=ttrpc version=3 May 27 03:56:00.247335 systemd[1]: Started cri-containerd-0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9.scope - libcontainer container 0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9. 
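Aside: the coredns WorkloadEndpoint dumps above print ports in Go hex notation (Port:0x35, Port:0x23c1). Decoded back to the familiar numbers:

```python
# Hex port values from the coredns endpoint dump, decoded.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
for name, port in ports.items():
    print(f"{name}: {port}")   # dns: 53, dns-tcp: 53, metrics: 9153
```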
May 27 03:56:00.287911 systemd-networkd[1472]: cali7f3c5b12f40: Link UP May 27 03:56:00.289301 systemd-networkd[1472]: cali7f3c5b12f40: Gained carrier May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:55:59.940 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0 calico-kube-controllers-68b896d46- calico-system 5900b6a9-212e-4f83-ab2e-42e0e9d86137 820 0 2025-05-27 03:55:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68b896d46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-145-45 calico-kube-controllers-68b896d46-g2wtg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f3c5b12f40 [] [] }} ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:55:59.941 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.018 [INFO][4532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" HandleID="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Workload="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.019 [INFO][4532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" HandleID="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Workload="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003899b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-45", "pod":"calico-kube-controllers-68b896d46-g2wtg", "timestamp":"2025-05-27 03:56:00.018141154 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.019 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.142 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.142 [INFO][4532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.203 [INFO][4532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.240 [INFO][4532] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.248 [INFO][4532] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.251 [INFO][4532] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.255 [INFO][4532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.255 [INFO][4532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.259 [INFO][4532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059 May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.265 [INFO][4532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.273 [INFO][4532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.199/26] block=192.168.75.192/26 handle="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.273 [INFO][4532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.199/26] handle="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" host="172-237-145-45" May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.274 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
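Aside: the MACs Calico writes into the endpoints above (c2:47:3a:8e:9f:1b for cali1ee76416581, 62:a7:09:65:3e:41 for cali9f58cc585b5) look random because they are generated rather than burned in. The standard flag bits in the first octet confirm they are locally administered unicast addresses:

```python
# Check the two standard flag bits of a MAC's first octet:
# bit 0x02 = locally administered, bit 0x01 = multicast.
def mac_flags(mac: str) -> tuple[bool, bool]:
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02), bool(first_octet & 0x01)

for mac in ["c2:47:3a:8e:9f:1b", "62:a7:09:65:3e:41"]:
    locally_administered, multicast = mac_flags(mac)
    print(mac, "locally-administered:", locally_administered,
          "multicast:", multicast)
# both print locally-administered: True, multicast: False (i.e. unicast)
```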
May 27 03:56:00.333231 containerd[1570]: 2025-05-27 03:56:00.275 [INFO][4532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.199/26] IPv6=[] ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" HandleID="k8s-pod-network.cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Workload="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.333728 containerd[1570]: 2025-05-27 03:56:00.285 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0", GenerateName:"calico-kube-controllers-68b896d46-", Namespace:"calico-system", SelfLink:"", UID:"5900b6a9-212e-4f83-ab2e-42e0e9d86137", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b896d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"calico-kube-controllers-68b896d46-g2wtg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f3c5b12f40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.333728 containerd[1570]: 2025-05-27 03:56:00.285 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.199/32] ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.333728 containerd[1570]: 2025-05-27 03:56:00.285 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f3c5b12f40 ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.333728 containerd[1570]: 2025-05-27 03:56:00.287 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.333728 containerd[1570]: 2025-05-27 03:56:00.288 [INFO][4476] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0", GenerateName:"calico-kube-controllers-68b896d46-", Namespace:"calico-system", SelfLink:"", UID:"5900b6a9-212e-4f83-ab2e-42e0e9d86137", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b896d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059", Pod:"calico-kube-controllers-68b896d46-g2wtg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f3c5b12f40", MAC:"2e:38:ca:4a:23:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.333728 containerd[1570]: 2025-05-27 03:56:00.314 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" Namespace="calico-system" Pod="calico-kube-controllers-68b896d46-g2wtg" WorkloadEndpoint="172--237--145--45-k8s-calico--kube--controllers--68b896d46--g2wtg-eth0" May 27 03:56:00.381364 containerd[1570]: time="2025-05-27T03:56:00.381289545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4f448c9d-kp96l,Uid:4087922b-0418-477d-a845-ce56592933b9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838\"" May 27 03:56:00.390539 containerd[1570]: time="2025-05-27T03:56:00.390518367Z" level=info msg="CreateContainer within sandbox \"c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:56:00.396371 containerd[1570]: time="2025-05-27T03:56:00.396339130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpzxg,Uid:929ede96-a62c-46af-99bf-c380684a0a25,Namespace:kube-system,Attempt:0,} returns sandbox id \"0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9\"" May 27 03:56:00.400714 kubelet[2692]: E0527 03:56:00.400663 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:56:00.406313 containerd[1570]: 
time="2025-05-27T03:56:00.406246513Z" level=info msg="Container e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60: CDI devices from CRI Config.CDIDevices: []" May 27 03:56:00.411297 containerd[1570]: time="2025-05-27T03:56:00.411274816Z" level=info msg="CreateContainer within sandbox \"0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:56:00.411872 containerd[1570]: time="2025-05-27T03:56:00.411279006Z" level=info msg="connecting to shim cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059" address="unix:///run/containerd/s/0ac63a0fb96a9a75f22ba84aff8f73ef383c69eda0080c9f8697331adb2ed24b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:56:00.415696 containerd[1570]: time="2025-05-27T03:56:00.415641166Z" level=info msg="CreateContainer within sandbox \"c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60\"" May 27 03:56:00.419182 containerd[1570]: time="2025-05-27T03:56:00.419142954Z" level=info msg="StartContainer for \"e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60\"" May 27 03:56:00.427303 containerd[1570]: time="2025-05-27T03:56:00.425611769Z" level=info msg="connecting to shim e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60" address="unix:///run/containerd/s/f7472fd918d0e9992cc3d33567d2b7e4f68f5810c046cf9fdb1cb52f91cb4e81" protocol=ttrpc version=3 May 27 03:56:00.444416 containerd[1570]: time="2025-05-27T03:56:00.444131022Z" level=info msg="Container 5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577: CDI devices from CRI Config.CDIDevices: []" May 27 03:56:00.448934 systemd-networkd[1472]: calie646dbe29de: Link UP May 27 03:56:00.450452 systemd-networkd[1472]: calie646dbe29de: Gained carrier May 27 03:56:00.456829 containerd[1570]: time="2025-05-27T03:56:00.456683172Z" level=info msg="CreateContainer within sandbox \"0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577\"" May 27 03:56:00.458531 containerd[1570]: time="2025-05-27T03:56:00.458503126Z" level=info msg="StartContainer for \"5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577\"" May 27 03:56:00.460668 containerd[1570]: time="2025-05-27T03:56:00.460622481Z" level=info msg="connecting to shim 5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577" address="unix:///run/containerd/s/12b8286579520543e6315b6d2f88c5f522e70a5de1c15149aced0f0622d36783" protocol=ttrpc version=3 May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:55:59.923 [INFO][4455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--145--45-k8s-csi--node--driver--pc498-eth0 csi-node-driver- calico-system 974f6c14-b459-4b2b-89e5-34bfda1490bf 722 0 2025-05-27 03:55:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-145-45 csi-node-driver-pc498 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie646dbe29de [] [] }} 
ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:55:59.928 [INFO][4455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.020 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" HandleID="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Workload="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.020 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" HandleID="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Workload="172--237--145--45-k8s-csi--node--driver--pc498-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031f6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-145-45", "pod":"csi-node-driver-pc498", "timestamp":"2025-05-27 03:56:00.02058853 +0000 UTC"}, Hostname:"172-237-145-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.020 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.273 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.273 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-145-45' May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.314 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.331 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.354 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.358 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.364 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.192/26 host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.365 [INFO][4526] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.192/26 handle="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.376 [INFO][4526] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7 May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.394 [INFO][4526] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.192/26 handle="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.410 [INFO][4526] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.200/26] block=192.168.75.192/26 handle="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.410 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.200/26] handle="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" host="172-237-145-45" May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.410 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:56:00.477573 containerd[1570]: 2025-05-27 03:56:00.410 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.200/26] IPv6=[] ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" HandleID="k8s-pod-network.4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Workload="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.478295 containerd[1570]: 2025-05-27 03:56:00.422 [INFO][4455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-csi--node--driver--pc498-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"974f6c14-b459-4b2b-89e5-34bfda1490bf", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"", Pod:"csi-node-driver-pc498", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie646dbe29de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.478295 containerd[1570]: 2025-05-27 03:56:00.422 [INFO][4455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.200/32] ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.478295 containerd[1570]: 2025-05-27 03:56:00.422 [INFO][4455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie646dbe29de ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.478295 containerd[1570]: 2025-05-27 03:56:00.451 [INFO][4455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.478295 containerd[1570]: 2025-05-27 03:56:00.451 [INFO][4455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" 
Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--145--45-k8s-csi--node--driver--pc498-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"974f6c14-b459-4b2b-89e5-34bfda1490bf", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-145-45", ContainerID:"4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7", Pod:"csi-node-driver-pc498", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie646dbe29de", MAC:"f6:e2:da:40:f5:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:56:00.478295 containerd[1570]: 2025-05-27 03:56:00.468 [INFO][4455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" Namespace="calico-system" Pod="csi-node-driver-pc498" WorkloadEndpoint="172--237--145--45-k8s-csi--node--driver--pc498-eth0" May 27 03:56:00.491709 systemd[1]: Started cri-containerd-cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059.scope - libcontainer container cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059. May 27 03:56:00.499575 systemd[1]: Started cri-containerd-e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60.scope - libcontainer container e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60. May 27 03:56:00.517736 systemd[1]: Started cri-containerd-5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577.scope - libcontainer container 5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577. May 27 03:56:00.552451 containerd[1570]: time="2025-05-27T03:56:00.552064575Z" level=info msg="connecting to shim 4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7" address="unix:///run/containerd/s/dc94087ad872ceaa41a20c2747cc8a670edc08687133b5299c9de1335f813864" namespace=k8s.io protocol=ttrpc version=3 May 27 03:56:00.581345 containerd[1570]: time="2025-05-27T03:56:00.581296034Z" level=info msg="StartContainer for \"5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577\" returns successfully" May 27 03:56:00.597476 systemd[1]: Started cri-containerd-4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7.scope - libcontainer container 4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7. 
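Aside: the "connecting to shim" entries give the exact unix socket each sandbox is wired through. A hypothetical on-node sanity check (the path is copied from the csi-node-driver entry; this helper is not part of any tooling in the log) that the shim socket is actually present:

```python
# Verify the containerd shim socket quoted in the log exists and is a socket.
import os
import stat

SHIM_SOCKET = ("/run/containerd/s/"
               "dc94087ad872ceaa41a20c2747cc8a670edc08687133b5299c9de1335f813864")

st = os.stat(SHIM_SOCKET)  # raises FileNotFoundError if the shim is gone
assert stat.S_ISSOCK(st.st_mode), f"{SHIM_SOCKET} exists but is not a socket"
print("shim socket present:", SHIM_SOCKET)
```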
May 27 03:56:00.638354 systemd-networkd[1472]: caliab16a309a84: Gained IPv6LL May 27 03:56:00.721317 containerd[1570]: time="2025-05-27T03:56:00.720865911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pc498,Uid:974f6c14-b459-4b2b-89e5-34bfda1490bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7\"" May 27 03:56:00.725498 containerd[1570]: time="2025-05-27T03:56:00.725403211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:56:00.733070 containerd[1570]: time="2025-05-27T03:56:00.733040949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b896d46-g2wtg,Uid:5900b6a9-212e-4f83-ab2e-42e0e9d86137,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059\"" May 27 03:56:00.736071 containerd[1570]: time="2025-05-27T03:56:00.736041576Z" level=info msg="StartContainer for \"e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60\" returns successfully" May 27 03:56:00.907000 kubelet[2692]: E0527 03:56:00.906882 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:56:00.914316 kubelet[2692]: E0527 03:56:00.914269 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:56:00.915630 kubelet[2692]: E0527 03:56:00.915560 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:56:00.932122 kubelet[2692]: I0527 03:56:00.931920 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f4f448c9d-6qqwn" podStartSLOduration=21.19196073 podStartE2EDuration="24.931180473s" podCreationTimestamp="2025-05-27 03:55:36 +0000 UTC" firstStartedPulling="2025-05-27 03:55:55.960080908 +0000 UTC m=+34.307899984" lastFinishedPulling="2025-05-27 03:55:59.699300651 +0000 UTC m=+38.047119727" observedRunningTime="2025-05-27 03:56:00.914431834 +0000 UTC m=+39.262250910" watchObservedRunningTime="2025-05-27 03:56:00.931180473 +0000 UTC m=+39.278999549" May 27 03:56:00.942219 kubelet[2692]: I0527 03:56:00.942140 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f4f448c9d-kp96l" podStartSLOduration=24.942130149 podStartE2EDuration="24.942130149s" podCreationTimestamp="2025-05-27 03:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:56:00.941083817 +0000 UTC m=+39.288902893" watchObservedRunningTime="2025-05-27 03:56:00.942130149 +0000 UTC m=+39.289949225" May 27 
03:56:00.957391 kubelet[2692]: I0527 03:56:00.957341 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xpzxg" podStartSLOduration=32.957331355 podStartE2EDuration="32.957331355s" podCreationTimestamp="2025-05-27 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:56:00.956485172 +0000 UTC m=+39.304304248" watchObservedRunningTime="2025-05-27 03:56:00.957331355 +0000 UTC m=+39.305150431" May 27 03:56:01.549760 containerd[1570]: time="2025-05-27T03:56:01.549709734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:56:01.554018 containerd[1570]: time="2025-05-27T03:56:01.553561153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:56:01.554457 containerd[1570]: time="2025-05-27T03:56:01.554426135Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:56:01.557269 containerd[1570]: time="2025-05-27T03:56:01.557240451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:56:01.557757 containerd[1570]: time="2025-05-27T03:56:01.557728242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 832.297561ms" May 27 03:56:01.557850 containerd[1570]: time="2025-05-27T03:56:01.557756902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:56:01.561013 containerd[1570]: time="2025-05-27T03:56:01.560848489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:56:01.565396 containerd[1570]: time="2025-05-27T03:56:01.565361709Z" level=info msg="CreateContainer within sandbox \"4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:56:01.579348 containerd[1570]: time="2025-05-27T03:56:01.579308749Z" level=info msg="Container 94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867: CDI devices from CRI Config.CDIDevices: []" May 27 03:56:01.588445 containerd[1570]: time="2025-05-27T03:56:01.587492276Z" level=info msg="CreateContainer within sandbox \"4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867\"" May 27 03:56:01.588707 containerd[1570]: time="2025-05-27T03:56:01.588678899Z" level=info msg="StartContainer for \"94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867\"" May 27 03:56:01.592203 containerd[1570]: time="2025-05-27T03:56:01.592036096Z" level=info msg="connecting to shim 94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867" 
address="unix:///run/containerd/s/dc94087ad872ceaa41a20c2747cc8a670edc08687133b5299c9de1335f813864" protocol=ttrpc version=3 May 27 03:56:01.632299 systemd[1]: Started cri-containerd-94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867.scope - libcontainer container 94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867. May 27 03:56:01.725594 systemd-networkd[1472]: cali9f58cc585b5: Gained IPv6LL May 27 03:56:01.769724 containerd[1570]: time="2025-05-27T03:56:01.769549070Z" level=info msg="StartContainer for \"94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867\" returns successfully" May 27 03:56:01.853354 systemd-networkd[1472]: calie646dbe29de: Gained IPv6LL May 27 03:56:01.918049 kubelet[2692]: E0527 03:56:01.917985 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:56:01.918459 kubelet[2692]: I0527 03:56:01.918324 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:56:01.918944 kubelet[2692]: I0527 03:56:01.918556 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:56:02.045390 systemd-networkd[1472]: cali1ee76416581: Gained IPv6LL May 27 03:56:02.301437 systemd-networkd[1472]: cali7f3c5b12f40: Gained IPv6LL May 27 03:56:02.921699 kubelet[2692]: E0527 03:56:02.921532 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:56:03.212403 containerd[1570]: time="2025-05-27T03:56:03.212075439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:56:03.213335 containerd[1570]: time="2025-05-27T03:56:03.212805750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:56:03.213991 containerd[1570]: time="2025-05-27T03:56:03.213768862Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:56:03.215274 containerd[1570]: time="2025-05-27T03:56:03.215254624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:56:03.215767 containerd[1570]: time="2025-05-27T03:56:03.215721465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 1.654844036s" May 27 03:56:03.215767 containerd[1570]: time="2025-05-27T03:56:03.215759075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:56:03.216763 containerd[1570]: time="2025-05-27T03:56:03.216729498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:56:03.234735 containerd[1570]: 
May 27 03:56:03.234735 containerd[1570]: time="2025-05-27T03:56:03.234705191Z" level=info msg="CreateContainer within sandbox \"cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 27 03:56:03.241357 containerd[1570]: time="2025-05-27T03:56:03.241323342Z" level=info msg="Container 3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96: CDI devices from CRI Config.CDIDevices: []"
May 27 03:56:03.250040 containerd[1570]: time="2025-05-27T03:56:03.249995099Z" level=info msg="CreateContainer within sandbox \"cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\""
May 27 03:56:03.251564 containerd[1570]: time="2025-05-27T03:56:03.250385600Z" level=info msg="StartContainer for \"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\""
May 27 03:56:03.252610 containerd[1570]: time="2025-05-27T03:56:03.252577744Z" level=info msg="connecting to shim 3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96" address="unix:///run/containerd/s/0ac63a0fb96a9a75f22ba84aff8f73ef383c69eda0080c9f8697331adb2ed24b" protocol=ttrpc version=3
May 27 03:56:03.278308 systemd[1]: Started cri-containerd-3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96.scope - libcontainer container 3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96.
May 27 03:56:03.313373 containerd[1570]: time="2025-05-27T03:56:03.313303075Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:56:03.314695 containerd[1570]: time="2025-05-27T03:56:03.314657438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:56:03.314832 containerd[1570]: time="2025-05-27T03:56:03.314759168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:56:03.314975 kubelet[2692]: E0527 03:56:03.314934 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:56:03.315052 kubelet[2692]: E0527 03:56:03.314982 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:56:03.315214 kubelet[2692]: E0527 03:56:03.315161 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a533b5daea634b52a7b2eac27807f9ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:56:03.316340 containerd[1570]: time="2025-05-27T03:56:03.316305141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 27 03:56:03.345870 containerd[1570]: time="2025-05-27T03:56:03.345831265Z" level=info msg="StartContainer for \"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" returns successfully"
May 27 03:56:03.944263 kubelet[2692]: I0527 03:56:03.943906 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68b896d46-g2wtg" podStartSLOduration=22.462487264 podStartE2EDuration="24.943888697s" podCreationTimestamp="2025-05-27 03:55:39 +0000 UTC" firstStartedPulling="2025-05-27 03:56:00.735238334 +0000 UTC m=+39.083057410" lastFinishedPulling="2025-05-27 03:56:03.216639767 +0000 UTC m=+41.564458843" observedRunningTime="2025-05-27 03:56:03.941666173 +0000 UTC m=+42.289485249" watchObservedRunningTime="2025-05-27 03:56:03.943888697 +0000 UTC m=+42.291707773"
May 27 03:56:03.984471 containerd[1570]: time="2025-05-27T03:56:03.984428002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"f0e4733960672c1ac016aac7106245e0fc409bc0319b1307b454b760ce0ca313\" pid:4953 exited_at:{seconds:1748318163 nanos:983818590}"
May 27 03:56:04.545734 containerd[1570]: time="2025-05-27T03:56:04.545676486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:56:04.547210 containerd[1570]: time="2025-05-27T03:56:04.546733037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 27 03:56:04.547822 containerd[1570]: time="2025-05-27T03:56:04.547785950Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:56:04.549241 containerd[1570]: time="2025-05-27T03:56:04.549180432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:56:04.549836 containerd[1570]: time="2025-05-27T03:56:04.549795553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.233450952s"
May 27 03:56:04.549881 containerd[1570]: time="2025-05-27T03:56:04.549835943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 27 03:56:04.551452 containerd[1570]: time="2025-05-27T03:56:04.551430206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:56:04.555132 containerd[1570]: time="2025-05-27T03:56:04.555100992Z" level=info msg="CreateContainer within sandbox \"4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 03:56:04.568042 containerd[1570]: time="2025-05-27T03:56:04.565574410Z" level=info msg="Container a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f: CDI devices from CRI Config.CDIDevices: []"
May 27 03:56:04.570460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994810974.mount: Deactivated successfully.
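Every failed pull in this journal dies at the same step: containerd's resolver asks ghcr.io for an anonymous pull token and gets 403 Forbidden back, so no image data is ever transferred (the "bytes read=86" is just the error body). The exchange can be reproduced off-node with nothing but the URL from the fetch-failed entries; a stdlib-only Go sketch:

```go
// Minimal reproduction of the token request containerd makes before an
// anonymous pull, using the exact URL from the "fetch failed" entries above.
// This only probes the registry; it is not part of the pull path itself.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const tokenURL = "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s\nbody=%s\n", resp.Status, body)
}
```

A 200 with a JSON bearer token would mean anonymous pulls are allowed and the problem is local; the 403 seen here means the registry side is rejecting the anonymous pull scope for these repositories.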
May 27 03:56:04.574865 containerd[1570]: time="2025-05-27T03:56:04.574820225Z" level=info msg="CreateContainer within sandbox \"4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f\""
May 27 03:56:04.575876 containerd[1570]: time="2025-05-27T03:56:04.575816437Z" level=info msg="StartContainer for \"a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f\""
May 27 03:56:04.577830 containerd[1570]: time="2025-05-27T03:56:04.577346449Z" level=info msg="connecting to shim a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f" address="unix:///run/containerd/s/dc94087ad872ceaa41a20c2747cc8a670edc08687133b5299c9de1335f813864" protocol=ttrpc version=3
May 27 03:56:04.609322 systemd[1]: Started cri-containerd-a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f.scope - libcontainer container a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f.
May 27 03:56:04.651983 containerd[1570]: time="2025-05-27T03:56:04.651948316Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:56:04.652064 containerd[1570]: time="2025-05-27T03:56:04.652033366Z" level=info msg="StartContainer for \"a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f\" returns successfully"
May 27 03:56:04.653513 containerd[1570]: time="2025-05-27T03:56:04.653166688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:56:04.653766 containerd[1570]: time="2025-05-27T03:56:04.653679619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:56:04.654574 kubelet[2692]: E0527 03:56:04.654412 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:56:04.654574 kubelet[2692]: E0527 03:56:04.654569 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:56:04.655403 kubelet[2692]: E0527 03:56:04.655246 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:56:04.656535 kubelet[2692]: E0527 03:56:04.656489 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 03:56:04.829114 kubelet[2692]: I0527 03:56:04.828528 2692 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 03:56:04.830065 kubelet[2692]: I0527 03:56:04.830043 2692 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 03:56:04.939947 kubelet[2692]: I0527 03:56:04.939740 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pc498" podStartSLOduration=22.11379552 podStartE2EDuration="25.939724684s" podCreationTimestamp="2025-05-27 03:55:39 +0000 UTC" firstStartedPulling="2025-05-27 03:56:00.72483961 +0000 UTC m=+39.072658686" lastFinishedPulling="2025-05-27 03:56:04.550768774 +0000 UTC m=+42.898587850" observedRunningTime="2025-05-27 03:56:04.93751325 +0000 UTC m=+43.285332326" watchObservedRunningTime="2025-05-27 03:56:04.939724684 +0000 UTC m=+43.287543760"
May 27 03:56:05.528789 kubelet[2692]: I0527 03:56:05.528432 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:56:09.183601 kubelet[2692]: I0527 03:56:09.183542 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:56:09.316816 containerd[1570]: time="2025-05-27T03:56:09.316774278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"75da32e62a4fd23a2ddbdc563294581d2d089e89ed68e8f6df6c9e43bf4b4b5e\" pid:5022 exited_at:{seconds:1748318169 nanos:316472797}"
May 27 03:56:09.402445 containerd[1570]: time="2025-05-27T03:56:09.402387731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"caf2a6f48aa5d9b4564497dcc191f7f9c3c020fa5d8c15ca839cf52cfe74f748\" pid:5048 exited_at:{seconds:1748318169 nanos:401798360}"
May 27 03:56:14.732634 kubelet[2692]: I0527 03:56:14.732565 2692 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:56:14.752380 containerd[1570]: time="2025-05-27T03:56:14.751989860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:56:14.866474 containerd[1570]: time="2025-05-27T03:56:14.866326314Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:56:14.867353 containerd[1570]: time="2025-05-27T03:56:14.867248445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:56:14.867353 containerd[1570]: time="2025-05-27T03:56:14.867329735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
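The two csi_plugin.go lines above record kubelet validating and registering the Calico CSI driver at its unix socket. A quick way to confirm the driver is still serving there is to dial the socket; this sketch checks connectivity only, not the real kubelet registration handshake (which is a gRPC exchange this sketch does not attempt):

```go
// Liveness probe for the CSI driver socket that csi_plugin.go registered
// above. Success only means something is accepting connections on the
// socket, which is usually enough to distinguish a dead driver from a
// registration problem.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("driver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("driver is listening on", sock)
}
```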
May 27 03:56:14.867740 kubelet[2692]: E0527 03:56:14.867610 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:56:14.868202 kubelet[2692]: E0527 03:56:14.867853 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:56:14.868711 kubelet[2692]: E0527 03:56:14.868665 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hxgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:56:14.870959 kubelet[2692]: E0527 03:56:14.869878 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 03:56:17.757291 kubelet[2692]: E0527 03:56:17.757242 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 03:56:28.753259 kubelet[2692]: E0527 03:56:28.752953 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 03:56:28.753789 containerd[1570]: time="2025-05-27T03:56:28.753505426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
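From here the journal settles into a retry rhythm: each failed pull is followed by "Back-off pulling image" sync errors, and the gap between real pull attempts grows (03:56:03 → 03:56:28 → 03:57:20 for whisker). This is kubelet's image-pull back-off; a sketch of the capped exponential delay it applies, assuming the default 10s initial step and 5m cap (an assumption, not read from this node's config):

```go
// Illustration of a capped exponential back-off of the kind that produces
// the ImagePullBackOff spacing in this journal. The 10s base and 300s cap
// are kubelet's usual image back-off defaults (assumed here).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 300*time.Second
	elapsed := time.Duration(0)
	for i := 0; i < 7; i++ {
		fmt.Printf("retry %d comes %v after the previous failure (t+%v)\n", i+1, delay, elapsed)
		elapsed += delay
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The observed gaps only roughly match this curve because retries are also gated by the pod sync loop, but the doubling trend is visible in the timestamps below.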
May 27 03:56:28.854128 containerd[1570]: time="2025-05-27T03:56:28.854079794Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:56:28.855782 containerd[1570]: time="2025-05-27T03:56:28.855700290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:56:28.855875 containerd[1570]: time="2025-05-27T03:56:28.855799201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:56:28.856066 kubelet[2692]: E0527 03:56:28.856026 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:56:28.856147 kubelet[2692]: E0527 03:56:28.856075 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:56:28.857252 kubelet[2692]: E0527 03:56:28.856179 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a533b5daea634b52a7b2eac27807f9ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:56:28.859161 containerd[1570]: time="2025-05-27T03:56:28.859137054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:56:28.957442 containerd[1570]: time="2025-05-27T03:56:28.957396753Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:56:28.958334 containerd[1570]: time="2025-05-27T03:56:28.958250196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:56:28.958334 containerd[1570]: time="2025-05-27T03:56:28.958303567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:56:28.958634 kubelet[2692]: E0527 03:56:28.958594 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:56:28.958712 kubelet[2692]: E0527 03:56:28.958647 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:56:28.958966 kubelet[2692]: E0527 03:56:28.958924 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:56:28.960355 kubelet[2692]: E0527 03:56:28.960313 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
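The recurring dns.go:153 error threaded through this journal is unrelated to the pull failures: the node's resolv.conf lists more nameservers than the libc resolver's limit of three, so kubelet applies only the first three (172.232.0.9, 172.232.0.19, 172.232.0.20) and warns about the rest. A sketch of the same check against /etc/resolv.conf (or whichever file kubelet's --resolv-conf points at on this host):

```go
// Reproduces the kubelet dns.go nameserver-limit check: the libc resolver
// only uses the first three "nameserver" lines, so anything beyond that is
// silently dropped by the system resolver and flagged by kubelet.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNS = 3 // glibc's MAXNS, the limit kubelet warns about
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNS {
		fmt.Printf("limit exceeded: applying %v, omitting %v\n", servers[:maxNS], servers[maxNS:])
	} else {
		fmt.Println("within limit:", servers)
	}
}
```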
May 27 03:56:32.750503 kubelet[2692]: E0527 03:56:32.750439 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 03:56:34.063222 containerd[1570]: time="2025-05-27T03:56:34.063158361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"d1ff5a2f29879fbec33981f4280390db75d5ee3217a822451eb908f16b3a9af6\" pid:5092 exited_at:{seconds:1748318194 nanos:62887612}"
May 27 03:56:39.399657 containerd[1570]: time="2025-05-27T03:56:39.399418971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"310773ff9e9b489758e83c58160c87bb6bdfcce914eb8145520f6737c87d6639\" pid:5120 exited_at:{seconds:1748318199 nanos:398327582}"
May 27 03:56:41.753647 containerd[1570]: time="2025-05-27T03:56:41.753378189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:56:41.853605 containerd[1570]: time="2025-05-27T03:56:41.853424464Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:56:41.856281 containerd[1570]: time="2025-05-27T03:56:41.854744018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:56:41.856528 containerd[1570]: time="2025-05-27T03:56:41.856413621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:56:41.857122 kubelet[2692]: E0527 03:56:41.856661 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:56:41.857122 kubelet[2692]: E0527 03:56:41.856734 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:56:41.857122 kubelet[2692]: E0527 03:56:41.856856 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hxgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:56:41.858755 kubelet[2692]: E0527 03:56:41.858720 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 03:56:42.751224 kubelet[2692]: E0527 03:56:42.750976 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 03:56:42.752228 kubelet[2692]: E0527 03:56:42.751466 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 03:56:48.749814 kubelet[2692]: E0527 03:56:48.749772 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 03:56:52.750871 kubelet[2692]: E0527 03:56:52.750812 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 03:56:53.309156 containerd[1570]: time="2025-05-27T03:56:53.309036569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"a67a82c63a2bc7f933b43420ace6cf113a2669540f2e7d099c5a13f325c26a9e\" pid:5149 exited_at:{seconds:1748318213 nanos:308795624}"
May 27 03:56:53.751209 kubelet[2692]: E0527 03:56:53.750296 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
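The TaskExit events interleaved with the pull failures are short-lived execs inside the already-running calico containers, and their exited_at:{seconds nanos} fields are plain epoch values. Converting a couple of them back to wall-clock time (values copied from the entries above) shows they line up with the journal stamps:

```go
// Converts the protobuf-style exited_at {seconds, nanos} fields from the
// TaskExit entries above back into wall-clock time; the values are taken
// from this journal and should print the same stamps the log shows.
package main

import (
	"fmt"
	"time"
)

func main() {
	exits := []struct {
		id             string
		seconds, nanos int64
	}{
		{"pid:5092", 1748318194, 62887612},  // logged at 03:56:34
		{"pid:5149", 1748318213, 308795624}, // logged at 03:56:53
	}
	for _, e := range exits {
		fmt.Println(e.id, time.Unix(e.seconds, e.nanos).UTC().Format(time.RFC3339Nano))
	}
}
```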
May 27 03:56:55.751846 kubelet[2692]: E0527 03:56:55.751752 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 03:56:59.753428 kubelet[2692]: E0527 03:56:59.753394 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 03:57:03.968289 containerd[1570]: time="2025-05-27T03:57:03.968159140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"9927254fe9669e8a7321734f6a217764f4afb8ee55d9e266578e9ba855f1826e\" pid:5173 exited_at:{seconds:1748318223 nanos:967995797}"
May 27 03:57:04.750481 kubelet[2692]: E0527 03:57:04.750436 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 03:57:08.750632 kubelet[2692]: E0527 03:57:08.750564 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 03:57:08.753140 kubelet[2692]: E0527 03:57:08.752215 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 03:57:09.388208 containerd[1570]: time="2025-05-27T03:57:09.388093478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"dff913f3b65567841399e1b7b483030ea46e360300bea7ef8d3660db68013186\" pid:5195 exited_at:{seconds:1748318229 nanos:387035685}"
May 27 03:57:15.750245 kubelet[2692]: E0527 03:57:15.749824 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 03:57:19.750201 kubelet[2692]: E0527 03:57:19.750136 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 03:57:20.755682 containerd[1570]: time="2025-05-27T03:57:20.755444117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:57:20.858898 containerd[1570]: time="2025-05-27T03:57:20.858835278Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:57:20.859672 containerd[1570]: time="2025-05-27T03:57:20.859641855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:57:20.859837 containerd[1570]: time="2025-05-27T03:57:20.859718896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:57:20.859904 kubelet[2692]: E0527 03:57:20.859831 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:57:20.859904 kubelet[2692]: E0527 03:57:20.859868 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:57:20.861206 kubelet[2692]: E0527 03:57:20.859974 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a533b5daea634b52a7b2eac27807f9ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:57:20.863211 containerd[1570]: time="2025-05-27T03:57:20.862725277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:57:20.961361 containerd[1570]: time="2025-05-27T03:57:20.961303929Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:57:20.962410 containerd[1570]: time="2025-05-27T03:57:20.962376020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:57:20.962492 containerd[1570]: time="2025-05-27T03:57:20.962455851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:57:20.962665 kubelet[2692]: E0527 03:57:20.962597 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:57:20.962721 kubelet[2692]: E0527 03:57:20.962674 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:57:20.962991 kubelet[2692]: E0527 03:57:20.962830 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:57:20.965077 kubelet[2692]: E0527 03:57:20.965016 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 03:57:33.751235 containerd[1570]: time="2025-05-27T03:57:33.750777625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:57:33.848055 containerd[1570]: time="2025-05-27T03:57:33.847989381Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:57:33.848756 containerd[1570]: time="2025-05-27T03:57:33.848721846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:57:33.848909 containerd[1570]: time="2025-05-27T03:57:33.848793007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:57:33.849011 kubelet[2692]: E0527 03:57:33.848939 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:57:33.849428 kubelet[2692]: E0527 03:57:33.849018 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:57:33.849428 kubelet[2692]: E0527 03:57:33.849210 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hxgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:57:33.850741 kubelet[2692]: E0527 03:57:33.850679 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:57:33.969423 containerd[1570]: time="2025-05-27T03:57:33.969374477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"144930c9f08ab2404a21b05342ce11aa8189493754934bd4f78b2b3dc4fa367c\" pid:5251 exited_at:{seconds:1748318253 nanos:968864513}" May 27 03:57:35.753750 kubelet[2692]: E0527 03:57:35.753607 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:57:37.756956 systemd[1]: Started sshd@7-172.237.145.45:22-139.178.68.195:36500.service - OpenSSH per-connection server daemon (139.178.68.195:36500). May 27 03:57:38.103973 sshd[5264]: Accepted publickey for core from 139.178.68.195 port 36500 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:38.105359 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:38.111575 systemd-logind[1546]: New session 8 of user core. May 27 03:57:38.117438 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:57:38.430844 sshd[5266]: Connection closed by 139.178.68.195 port 36500 May 27 03:57:38.431647 sshd-session[5264]: pam_unix(sshd:session): session closed for user core May 27 03:57:38.437518 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit. May 27 03:57:38.437981 systemd[1]: sshd@7-172.237.145.45:22-139.178.68.195:36500.service: Deactivated successfully. May 27 03:57:38.440787 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:57:38.443616 systemd-logind[1546]: Removed session 8. 
May 27 03:57:38.750513 kubelet[2692]: E0527 03:57:38.750478 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:57:39.476769 containerd[1570]: time="2025-05-27T03:57:39.476710391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"c949560cb0c3b102815ac2c9bf509e9ee8462348491c8e2991aafd55865bfd60\" pid:5290 exited_at:{seconds:1748318259 nanos:476296128}" May 27 03:57:43.497389 systemd[1]: Started sshd@8-172.237.145.45:22-139.178.68.195:36510.service - OpenSSH per-connection server daemon (139.178.68.195:36510). May 27 03:57:43.839218 sshd[5302]: Accepted publickey for core from 139.178.68.195 port 36510 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:43.840406 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:43.846424 systemd-logind[1546]: New session 9 of user core. May 27 03:57:43.853896 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:57:44.169469 sshd[5304]: Connection closed by 139.178.68.195 port 36510 May 27 03:57:44.170876 sshd-session[5302]: pam_unix(sshd:session): session closed for user core May 27 03:57:44.175536 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit. May 27 03:57:44.177456 systemd[1]: sshd@8-172.237.145.45:22-139.178.68.195:36510.service: Deactivated successfully. May 27 03:57:44.181491 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:57:44.183954 systemd-logind[1546]: Removed session 9. May 27 03:57:46.751426 kubelet[2692]: E0527 03:57:46.751348 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:57:46.752753 kubelet[2692]: E0527 03:57:46.752493 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:57:49.230531 systemd[1]: Started sshd@9-172.237.145.45:22-139.178.68.195:54616.service - OpenSSH per-connection server daemon (139.178.68.195:54616). May 27 03:57:49.577482 sshd[5317]: Accepted publickey for core from 139.178.68.195 port 54616 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:49.578882 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:49.585509 systemd-logind[1546]: New session 10 of user core. May 27 03:57:49.596510 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:57:49.898902 sshd[5319]: Connection closed by 139.178.68.195 port 54616 May 27 03:57:49.899390 sshd-session[5317]: pam_unix(sshd:session): session closed for user core May 27 03:57:49.904964 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit. May 27 03:57:49.905792 systemd[1]: sshd@9-172.237.145.45:22-139.178.68.195:54616.service: Deactivated successfully. May 27 03:57:49.908070 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:57:49.910870 systemd-logind[1546]: Removed session 10. May 27 03:57:49.958990 systemd[1]: Started sshd@10-172.237.145.45:22-139.178.68.195:54630.service - OpenSSH per-connection server daemon (139.178.68.195:54630). May 27 03:57:50.293355 sshd[5336]: Accepted publickey for core from 139.178.68.195 port 54630 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:50.294852 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:50.301550 systemd-logind[1546]: New session 11 of user core. May 27 03:57:50.306426 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:57:50.636102 sshd[5338]: Connection closed by 139.178.68.195 port 54630 May 27 03:57:50.637591 sshd-session[5336]: pam_unix(sshd:session): session closed for user core May 27 03:57:50.642861 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit. May 27 03:57:50.644012 systemd[1]: sshd@10-172.237.145.45:22-139.178.68.195:54630.service: Deactivated successfully. May 27 03:57:50.647445 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:57:50.649625 systemd-logind[1546]: Removed session 11. May 27 03:57:50.702679 systemd[1]: Started sshd@11-172.237.145.45:22-139.178.68.195:54638.service - OpenSSH per-connection server daemon (139.178.68.195:54638). May 27 03:57:51.047867 sshd[5348]: Accepted publickey for core from 139.178.68.195 port 54638 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:51.050587 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:51.057079 systemd-logind[1546]: New session 12 of user core. May 27 03:57:51.062436 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:57:51.361576 sshd[5350]: Connection closed by 139.178.68.195 port 54638 May 27 03:57:51.362367 sshd-session[5348]: pam_unix(sshd:session): session closed for user core May 27 03:57:51.367163 systemd[1]: sshd@11-172.237.145.45:22-139.178.68.195:54638.service: Deactivated successfully. May 27 03:57:51.369389 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:57:51.370471 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit. 
May 27 03:57:51.372281 systemd-logind[1546]: Removed session 12. May 27 03:57:53.320852 containerd[1570]: time="2025-05-27T03:57:53.320813065Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"a6cfb68dd090d37758beaa2af7baba377fb68c084a38d3d7b06b524d1370bd22\" pid:5373 exited_at:{seconds:1748318273 nanos:320081550}" May 27 03:57:56.426748 systemd[1]: Started sshd@12-172.237.145.45:22-139.178.68.195:42998.service - OpenSSH per-connection server daemon (139.178.68.195:42998). May 27 03:57:56.765929 sshd[5383]: Accepted publickey for core from 139.178.68.195 port 42998 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:56.767667 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:56.772642 systemd-logind[1546]: New session 13 of user core. May 27 03:57:56.775387 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:57:57.078796 sshd[5385]: Connection closed by 139.178.68.195 port 42998 May 27 03:57:57.079628 sshd-session[5383]: pam_unix(sshd:session): session closed for user core May 27 03:57:57.083804 systemd[1]: sshd@12-172.237.145.45:22-139.178.68.195:42998.service: Deactivated successfully. May 27 03:57:57.087286 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:57:57.088458 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit. May 27 03:57:57.090346 systemd-logind[1546]: Removed session 13. May 27 03:57:57.137077 systemd[1]: Started sshd@13-172.237.145.45:22-139.178.68.195:43004.service - OpenSSH per-connection server daemon (139.178.68.195:43004). May 27 03:57:57.467234 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 43004 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:57.469508 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:57.475860 systemd-logind[1546]: New session 14 of user core. May 27 03:57:57.481472 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:57:57.754270 kubelet[2692]: E0527 03:57:57.754223 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:57:57.899270 sshd[5399]: Connection closed by 139.178.68.195 port 43004 May 27 03:57:57.899982 sshd-session[5397]: pam_unix(sshd:session): session closed for user core May 27 03:57:57.905144 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit. May 27 03:57:57.906226 systemd[1]: sshd@13-172.237.145.45:22-139.178.68.195:43004.service: Deactivated successfully. May 27 03:57:57.909961 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:57:57.911770 systemd-logind[1546]: Removed session 14. 
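[Editor's note] The alternation between ErrImagePull and ImagePullBackOff above is the kubelet's retry policy at work: after a failed pull it parks the image in a back-off and only logs "Back-off pulling image ..." on each pod sync until the delay expires, which is why actual PullImage attempts in this log (e.g. whisker at 03:57:20 and 03:58:51) are roughly ninety seconds apart while the back-off messages repeat in between. A sketch of the doubling-with-cap schedule follows; the 10s initial delay and 5m cap are the kubelet defaults to the best of my knowledge, not values read from this node's configuration.

```go
// backoff_sketch.go - doubling back-off between image pull attempts.
// Initial delay and cap are assumed kubelet defaults, not confirmed here.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d fails; next pull allowed after %v (t+%v)\n",
			attempt, delay, elapsed+delay)
		elapsed += delay
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```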
May 27 03:57:57.961819 systemd[1]: Started sshd@14-172.237.145.45:22-139.178.68.195:43006.service - OpenSSH per-connection server daemon (139.178.68.195:43006). May 27 03:57:58.300531 sshd[5409]: Accepted publickey for core from 139.178.68.195 port 43006 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:58.302072 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:58.308180 systemd-logind[1546]: New session 15 of user core. May 27 03:57:58.315503 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:57:58.751228 kubelet[2692]: E0527 03:57:58.750784 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:57:59.211555 sshd[5411]: Connection closed by 139.178.68.195 port 43006 May 27 03:57:59.212776 sshd-session[5409]: pam_unix(sshd:session): session closed for user core May 27 03:57:59.218204 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit. May 27 03:57:59.219264 systemd[1]: sshd@14-172.237.145.45:22-139.178.68.195:43006.service: Deactivated successfully. May 27 03:57:59.222589 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:57:59.225409 systemd-logind[1546]: Removed session 15. May 27 03:57:59.279458 systemd[1]: Started sshd@15-172.237.145.45:22-139.178.68.195:43012.service - OpenSSH per-connection server daemon (139.178.68.195:43012). May 27 03:57:59.626142 sshd[5431]: Accepted publickey for core from 139.178.68.195 port 43012 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:57:59.628018 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:57:59.633410 systemd-logind[1546]: New session 16 of user core. May 27 03:57:59.641314 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:58:00.026688 sshd[5433]: Connection closed by 139.178.68.195 port 43012 May 27 03:58:00.028357 sshd-session[5431]: pam_unix(sshd:session): session closed for user core May 27 03:58:00.032505 systemd[1]: sshd@15-172.237.145.45:22-139.178.68.195:43012.service: Deactivated successfully. May 27 03:58:00.034852 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:58:00.036337 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit. May 27 03:58:00.037777 systemd-logind[1546]: Removed session 16. 
May 27 03:58:00.092982 systemd[1]: Started sshd@16-172.237.145.45:22-139.178.68.195:43020.service - OpenSSH per-connection server daemon (139.178.68.195:43020). May 27 03:58:00.426957 sshd[5443]: Accepted publickey for core from 139.178.68.195 port 43020 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:00.428299 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:00.434653 systemd-logind[1546]: New session 17 of user core. May 27 03:58:00.439324 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:58:00.721764 sshd[5445]: Connection closed by 139.178.68.195 port 43020 May 27 03:58:00.722515 sshd-session[5443]: pam_unix(sshd:session): session closed for user core May 27 03:58:00.727264 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit. May 27 03:58:00.728165 systemd[1]: sshd@16-172.237.145.45:22-139.178.68.195:43020.service: Deactivated successfully. May 27 03:58:00.730708 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:58:00.732155 systemd-logind[1546]: Removed session 17. May 27 03:58:01.750039 kubelet[2692]: E0527 03:58:01.749902 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:02.749876 kubelet[2692]: E0527 03:58:02.749843 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:03.967242 containerd[1570]: time="2025-05-27T03:58:03.967201609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"ba3f47ffb4bb9780c9d4e54fefe6a6c7f440a14bcd2a769f3033f4595cdf8e02\" pid:5468 exited_at:{seconds:1748318283 nanos:966997078}" May 27 03:58:05.782651 systemd[1]: Started sshd@17-172.237.145.45:22-139.178.68.195:55170.service - OpenSSH per-connection server daemon (139.178.68.195:55170). May 27 03:58:06.119761 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 55170 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:06.121056 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:06.125253 systemd-logind[1546]: New session 18 of user core. May 27 03:58:06.133309 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:58:06.420355 sshd[5480]: Connection closed by 139.178.68.195 port 55170 May 27 03:58:06.421024 sshd-session[5478]: pam_unix(sshd:session): session closed for user core May 27 03:58:06.424278 systemd[1]: sshd@17-172.237.145.45:22-139.178.68.195:55170.service: Deactivated successfully. May 27 03:58:06.426652 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:58:06.428985 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit. May 27 03:58:06.430913 systemd-logind[1546]: Removed session 18. 
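[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warning means the node's /etc/resolv.conf lists more nameservers than the classic resolver limit of three (MAXNS), so the kubelet truncates the list; the "applied nameserver line" in the log shows the three survivors (172.232.0.9, 172.232.0.19, 172.232.0.20). The sketch below mirrors that check under stated assumptions: only the first three addresses come from this log, and the fourth entry is hypothetical, added to show what would trigger the warning.

```go
// dns_limit_sketch.go - reproduce the check behind the kubelet's
// "Nameserver limits exceeded" warning: at most 3 nameservers are honored.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf limit (MAXNS)

func main() {
	resolvConf := `nameserver 172.232.0.9
nameserver 172.232.0.19
nameserver 172.232.0.20
nameserver 1.1.1.1` // hypothetical 4th entry; not in this node's log

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: applied %v, omitted %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```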
May 27 03:58:07.750861 kubelet[2692]: E0527 03:58:07.749896 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:09.396071 containerd[1570]: time="2025-05-27T03:58:09.396027221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"f473814fd3486eb8e0d5b6cb1f57cc0918ebb3451e903f70777c24cda5c25c30\" pid:5502 exited_at:{seconds:1748318289 nanos:395736699}" May 27 03:58:10.751852 kubelet[2692]: E0527 03:58:10.751590 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:58:11.486385 systemd[1]: Started sshd@18-172.237.145.45:22-139.178.68.195:55176.service - OpenSSH per-connection server daemon (139.178.68.195:55176). May 27 03:58:11.825635 sshd[5514]: Accepted publickey for core from 139.178.68.195 port 55176 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:11.828821 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:11.838901 systemd-logind[1546]: New session 19 of user core. May 27 03:58:11.843454 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:58:12.147215 sshd[5516]: Connection closed by 139.178.68.195 port 55176 May 27 03:58:12.146568 sshd-session[5514]: pam_unix(sshd:session): session closed for user core May 27 03:58:12.150663 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit. May 27 03:58:12.153320 systemd[1]: sshd@18-172.237.145.45:22-139.178.68.195:55176.service: Deactivated successfully. May 27 03:58:12.156368 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:58:12.159501 systemd-logind[1546]: Removed session 19. 
May 27 03:58:13.751460 kubelet[2692]: E0527 03:58:13.751223 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:58:14.750085 kubelet[2692]: E0527 03:58:14.750056 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:17.204573 systemd[1]: Started sshd@19-172.237.145.45:22-139.178.68.195:55044.service - OpenSSH per-connection server daemon (139.178.68.195:55044). May 27 03:58:17.545407 sshd[5528]: Accepted publickey for core from 139.178.68.195 port 55044 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:17.546841 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:17.551392 systemd-logind[1546]: New session 20 of user core. May 27 03:58:17.556320 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:58:17.849847 sshd[5530]: Connection closed by 139.178.68.195 port 55044 May 27 03:58:17.850662 sshd-session[5528]: pam_unix(sshd:session): session closed for user core May 27 03:58:17.855157 systemd-logind[1546]: Session 20 logged out. Waiting for processes to exit. May 27 03:58:17.855966 systemd[1]: sshd@19-172.237.145.45:22-139.178.68.195:55044.service: Deactivated successfully. May 27 03:58:17.858502 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:58:17.860287 systemd-logind[1546]: Removed session 20. May 27 03:58:21.750480 kubelet[2692]: E0527 03:58:21.750110 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:22.910684 systemd[1]: Started sshd@20-172.237.145.45:22-139.178.68.195:55056.service - OpenSSH per-connection server daemon (139.178.68.195:55056). May 27 03:58:23.253973 sshd[5544]: Accepted publickey for core from 139.178.68.195 port 55056 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:23.256091 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:23.263273 systemd-logind[1546]: New session 21 of user core. May 27 03:58:23.267317 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 27 03:58:23.561458 sshd[5546]: Connection closed by 139.178.68.195 port 55056 May 27 03:58:23.562342 sshd-session[5544]: pam_unix(sshd:session): session closed for user core May 27 03:58:23.567746 systemd[1]: sshd@20-172.237.145.45:22-139.178.68.195:55056.service: Deactivated successfully. May 27 03:58:23.571168 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:58:23.572346 systemd-logind[1546]: Session 21 logged out. Waiting for processes to exit. May 27 03:58:23.574243 systemd-logind[1546]: Removed session 21. May 27 03:58:23.751337 kubelet[2692]: E0527 03:58:23.751063 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:58:26.751960 kubelet[2692]: E0527 03:58:26.751600 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:58:28.624944 systemd[1]: Started sshd@21-172.237.145.45:22-139.178.68.195:44418.service - OpenSSH per-connection server daemon (139.178.68.195:44418). May 27 03:58:28.967414 sshd[5560]: Accepted publickey for core from 139.178.68.195 port 44418 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:28.969475 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:28.976396 systemd-logind[1546]: New session 22 of user core. May 27 03:58:28.981544 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:58:29.282567 sshd[5562]: Connection closed by 139.178.68.195 port 44418 May 27 03:58:29.281538 sshd-session[5560]: pam_unix(sshd:session): session closed for user core May 27 03:58:29.286396 systemd-logind[1546]: Session 22 logged out. Waiting for processes to exit. May 27 03:58:29.287053 systemd[1]: sshd@21-172.237.145.45:22-139.178.68.195:44418.service: Deactivated successfully. 
May 27 03:58:29.291674 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:58:29.295183 systemd-logind[1546]: Removed session 22. May 27 03:58:30.750016 kubelet[2692]: E0527 03:58:30.749978 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:33.966589 containerd[1570]: time="2025-05-27T03:58:33.966555911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"82e50f5a02fb669cfa8c8fbab084ea03b8d4cd4f34094c8d9475ad3282b720f1\" pid:5585 exited_at:{seconds:1748318313 nanos:966337320}" May 27 03:58:34.348476 systemd[1]: Started sshd@22-172.237.145.45:22-139.178.68.195:36508.service - OpenSSH per-connection server daemon (139.178.68.195:36508). May 27 03:58:34.698159 sshd[5596]: Accepted publickey for core from 139.178.68.195 port 36508 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:34.699629 sshd-session[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:34.707489 systemd-logind[1546]: New session 23 of user core. May 27 03:58:34.714324 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:58:34.750935 kubelet[2692]: E0527 03:58:34.750880 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:58:35.001012 sshd[5598]: Connection closed by 139.178.68.195 port 36508 May 27 03:58:35.001614 sshd-session[5596]: pam_unix(sshd:session): session closed for user core May 27 03:58:35.005881 systemd[1]: sshd@22-172.237.145.45:22-139.178.68.195:36508.service: Deactivated successfully. May 27 03:58:35.008831 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:58:35.010321 systemd-logind[1546]: Session 23 logged out. Waiting for processes to exit. May 27 03:58:35.012223 systemd-logind[1546]: Removed session 23. May 27 03:58:39.390834 containerd[1570]: time="2025-05-27T03:58:39.390791979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"fd5305d0c478de8ab0febcde3350f199c3e2a1867dccb4e8b8011ef346e12c82\" pid:5627 exited_at:{seconds:1748318319 nanos:390454406}" May 27 03:58:40.060518 systemd[1]: Started sshd@23-172.237.145.45:22-139.178.68.195:36518.service - OpenSSH per-connection server daemon (139.178.68.195:36518). May 27 03:58:40.386085 sshd[5640]: Accepted publickey for core from 139.178.68.195 port 36518 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:40.387600 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:40.392832 systemd-logind[1546]: New session 24 of user core. May 27 03:58:40.397307 systemd[1]: Started session-24.scope - Session 24 of User core. 
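[Editor's note] The TaskExit events in this log come in steady streams per container: the exec tasks inside 9e73ba59... exit every 30 seconds, and those inside 3655c033... decompose into one 30-second and one 60-second series. That cadence is consistent with periodic exec health probes (a readiness plus a liveness check), but that reading is an inference on my part, not something the log states. The arithmetic itself is checkable from the logged exited_at values, as the sketch below shows.

```go
// taskexit_cadence.go - convert the exited_at seconds logged above into
// wall-clock times and per-series intervals. Timestamps are copied verbatim
// from the TaskExit entries; the series grouping is my interpretation.
package main

import (
	"fmt"
	"time"
)

func main() {
	series := []struct {
		label string
		ts    []int64
	}{
		{"9e73ba597a7d", []int64{1748318229, 1748318259, 1748318289, 1748318319}},
		{"3655c0334616 stream A", []int64{1748318253, 1748318283, 1748318313}},
		{"3655c0334616 stream B", []int64{1748318273, 1748318333}},
	}
	for _, s := range series {
		for i, t := range s.ts {
			line := fmt.Sprintf("%s exec exited at %s UTC",
				s.label, time.Unix(t, 0).UTC().Format("15:04:05"))
			if i > 0 {
				line += fmt.Sprintf(" (+%ds)", t-s.ts[i-1])
			}
			fmt.Println(line)
		}
	}
}
```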
May 27 03:58:40.681350 sshd[5642]: Connection closed by 139.178.68.195 port 36518 May 27 03:58:40.682138 sshd-session[5640]: pam_unix(sshd:session): session closed for user core May 27 03:58:40.686601 systemd[1]: sshd@23-172.237.145.45:22-139.178.68.195:36518.service: Deactivated successfully. May 27 03:58:40.688733 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:58:40.690650 systemd-logind[1546]: Session 24 logged out. Waiting for processes to exit. May 27 03:58:40.692466 systemd-logind[1546]: Removed session 24. May 27 03:58:40.751441 kubelet[2692]: E0527 03:58:40.751396 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:58:43.750252 kubelet[2692]: E0527 03:58:43.749764 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:58:45.749496 systemd[1]: Started sshd@24-172.237.145.45:22-139.178.68.195:49810.service - OpenSSH per-connection server daemon (139.178.68.195:49810). May 27 03:58:46.087974 sshd[5656]: Accepted publickey for core from 139.178.68.195 port 49810 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:46.089144 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:46.093837 systemd-logind[1546]: New session 25 of user core. May 27 03:58:46.105326 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 03:58:46.388754 sshd[5658]: Connection closed by 139.178.68.195 port 49810 May 27 03:58:46.389636 sshd-session[5656]: pam_unix(sshd:session): session closed for user core May 27 03:58:46.394255 systemd[1]: sshd@24-172.237.145.45:22-139.178.68.195:49810.service: Deactivated successfully. May 27 03:58:46.397880 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:58:46.399081 systemd-logind[1546]: Session 25 logged out. Waiting for processes to exit. May 27 03:58:46.400987 systemd-logind[1546]: Removed session 25. 
May 27 03:58:48.750594 kubelet[2692]: E0527 03:58:48.750544 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:58:51.450526 systemd[1]: Started sshd@25-172.237.145.45:22-139.178.68.195:49824.service - OpenSSH per-connection server daemon (139.178.68.195:49824). May 27 03:58:51.751249 containerd[1570]: time="2025-05-27T03:58:51.751212430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:58:51.797530 sshd[5670]: Accepted publickey for core from 139.178.68.195 port 49824 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:51.799963 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:51.805243 systemd-logind[1546]: New session 26 of user core. May 27 03:58:51.810373 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 03:58:51.849245 containerd[1570]: time="2025-05-27T03:58:51.849173318Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:58:51.850164 containerd[1570]: time="2025-05-27T03:58:51.850112338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:58:51.850298 containerd[1570]: time="2025-05-27T03:58:51.850173580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:58:51.850358 kubelet[2692]: E0527 03:58:51.850296 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:58:51.850758 kubelet[2692]: E0527 03:58:51.850367 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:58:51.850758 kubelet[2692]: E0527 03:58:51.850473 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a533b5daea634b52a7b2eac27807f9ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:58:51.852289 containerd[1570]: time="2025-05-27T03:58:51.852268945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:58:51.945976 containerd[1570]: time="2025-05-27T03:58:51.945900807Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:58:51.946835 containerd[1570]: time="2025-05-27T03:58:51.946779684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:58:51.946920 containerd[1570]: time="2025-05-27T03:58:51.946810235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:58:51.947158 kubelet[2692]: E0527 03:58:51.947095 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:58:51.947397 kubelet[2692]: E0527 03:58:51.947170 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:58:51.947397 kubelet[2692]: E0527 03:58:51.947342 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmhc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cdf9fdb77-bk2d7_calico-system(dc5ba6bb-b92b-4490-90f8-193e957a2beb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:58:51.948753 kubelet[2692]: E0527 03:58:51.948690 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:58:52.108861 sshd[5672]: Connection closed by 139.178.68.195 port 49824 May 27 03:58:52.109039 sshd-session[5670]: pam_unix(sshd:session): session closed for user core May 27 03:58:52.115269 systemd[1]: sshd@25-172.237.145.45:22-139.178.68.195:49824.service: Deactivated successfully. May 27 03:58:52.118302 systemd[1]: session-26.scope: Deactivated successfully. May 27 03:58:52.119705 systemd-logind[1546]: Session 26 logged out. Waiting for processes to exit. May 27 03:58:52.121950 systemd-logind[1546]: Removed session 26. May 27 03:58:53.310967 containerd[1570]: time="2025-05-27T03:58:53.310918221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"1b5587db7b12f1f642ef95162ba0b06c5a1b35abd8588c32e74a8ab6e2cd93cb\" pid:5696 exited_at:{seconds:1748318333 nanos:310754546}" May 27 03:58:57.169181 systemd[1]: Started sshd@26-172.237.145.45:22-139.178.68.195:53400.service - OpenSSH per-connection server daemon (139.178.68.195:53400). May 27 03:58:57.508329 sshd[5706]: Accepted publickey for core from 139.178.68.195 port 53400 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:58:57.509844 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:58:57.515247 systemd-logind[1546]: New session 27 of user core. May 27 03:58:57.519339 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 03:58:57.813684 sshd[5708]: Connection closed by 139.178.68.195 port 53400 May 27 03:58:57.814660 sshd-session[5706]: pam_unix(sshd:session): session closed for user core May 27 03:58:57.819555 systemd-logind[1546]: Session 27 logged out. Waiting for processes to exit. May 27 03:58:57.820577 systemd[1]: sshd@26-172.237.145.45:22-139.178.68.195:53400.service: Deactivated successfully. May 27 03:58:57.823664 systemd[1]: session-27.scope: Deactivated successfully. May 27 03:58:57.826637 systemd-logind[1546]: Removed session 27. 
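Annotation: every ErrImagePull above shares one root cause. Before pulling, containerd asks ghcr.io for an anonymous bearer token scoped to the repository, and that request is answered with 403 Forbidden, so the image reference can never be resolved. A minimal sketch, assuming outbound HTTPS from the node, that reproduces the same token fetch outside kubelet/containerd (the URL is copied verbatim from the log):

# Reproduce the anonymous-token fetch containerd performs before pulling
# ghcr.io/flatcar/calico/whisker:v3.30.0. A healthy registry answers 200
# with a JSON token; the node in this log receives 403 Forbidden.
import json
import urllib.error
import urllib.request

TOKEN_URL = ("https://ghcr.io/token"
             "?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull"
             "&service=ghcr.io")

try:
    with urllib.request.urlopen(TOKEN_URL, timeout=10) as resp:
        body = json.load(resp)
        print("HTTP", resp.status, "- token issued:", "token" in body)
except urllib.error.HTTPError as err:
    # The node above lands in this branch: 403 Forbidden from /token.
    print("HTTP", err.code, err.reason)

A 200 response carrying a "token" field would let the pull proceed; the 403 seen here means the registry refuses even anonymous pull scope for that repository, so no amount of retrying on the node side can succeed.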
May 27 03:59:00.750967 containerd[1570]: time="2025-05-27T03:59:00.750888152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:59:00.848034 containerd[1570]: time="2025-05-27T03:59:00.847989795Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:59:00.848930 containerd[1570]: time="2025-05-27T03:59:00.848894740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:59:00.848930 containerd[1570]: time="2025-05-27T03:59:00.848952332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:59:00.849119 kubelet[2692]: E0527 03:59:00.849084 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:59:00.849442 kubelet[2692]: E0527 03:59:00.849127 2692 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:59:00.849442 kubelet[2692]: E0527 03:59:00.849300 2692 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hxgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-n8xhd_calico-system(305d83db-a282-47d8-aae7-29f765f39c27): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:59:00.850588 kubelet[2692]: E0527 03:59:00.850517 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:59:02.875656 systemd[1]: Started sshd@27-172.237.145.45:22-139.178.68.195:53402.service - OpenSSH per-connection server daemon (139.178.68.195:53402). May 27 03:59:03.209869 sshd[5742]: Accepted publickey for core from 139.178.68.195 port 53402 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:03.211521 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:03.216904 systemd-logind[1546]: New session 28 of user core. May 27 03:59:03.225304 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 03:59:03.509683 sshd[5744]: Connection closed by 139.178.68.195 port 53402 May 27 03:59:03.510509 sshd-session[5742]: pam_unix(sshd:session): session closed for user core May 27 03:59:03.514989 systemd-logind[1546]: Session 28 logged out. Waiting for processes to exit. May 27 03:59:03.515929 systemd[1]: sshd@27-172.237.145.45:22-139.178.68.195:53402.service: Deactivated successfully. May 27 03:59:03.518527 systemd[1]: session-28.scope: Deactivated successfully. May 27 03:59:03.520126 systemd-logind[1546]: Removed session 28. May 27 03:59:03.752713 kubelet[2692]: E0527 03:59:03.752655 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:59:03.968389 containerd[1570]: time="2025-05-27T03:59:03.968078170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"9418036e7511dbf491b83c82e9daf4bd7d3886f86c60a785b1e910ebd6d1a1a2\" pid:5767 exited_at:{seconds:1748318343 nanos:967816804}" May 27 03:59:05.750171 kubelet[2692]: E0527 03:59:05.749890 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:59:08.573707 systemd[1]: Started sshd@28-172.237.145.45:22-139.178.68.195:34798.service - OpenSSH per-connection server daemon (139.178.68.195:34798). 
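Annotation: the goldmane container spec embedded in the "Unhandled Error" dump above is hard to scan inline, so here it is restated as plain data. Every value below is transcribed from the log line itself; nothing is added:

# goldmane container spec from the kuberuntime_manager error dump,
# transcribed for readability (same values, different layout).
GOLDMANE = {
    "image": "ghcr.io/flatcar/calico/goldmane:v3.30.0",
    "env": {
        "LOG_LEVEL": "INFO",
        "PORT": "7443",
        "SERVER_CERT_PATH": "/goldmane-key-pair/tls.crt",
        "SERVER_KEY_PATH": "/goldmane-key-pair/tls.key",
        "CA_CERT_PATH": "/etc/pki/tls/certs/tigera-ca-bundle.crt",
        "PUSH_URL": "https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk",
        "FILE_CONFIG_PATH": "/config/config.json",
        "HEALTH_ENABLED": "true",
    },
    "liveness_probe": {"exec": ["/health", "-live"], "period_s": 60, "timeout_s": 5},
    "readiness_probe": {"exec": ["/health", "-ready"], "period_s": 30, "timeout_s": 5},
    "security": {"runAsUser": 10001, "runAsGroup": 10001,
                 "runAsNonRoot": True, "drop_caps": ["ALL"]},
}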
May 27 03:59:08.916596 sshd[5777]: Accepted publickey for core from 139.178.68.195 port 34798 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:08.918031 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:08.923843 systemd-logind[1546]: New session 29 of user core. May 27 03:59:08.935372 systemd[1]: Started session-29.scope - Session 29 of User core. May 27 03:59:09.222226 sshd[5779]: Connection closed by 139.178.68.195 port 34798 May 27 03:59:09.222947 sshd-session[5777]: pam_unix(sshd:session): session closed for user core May 27 03:59:09.227140 systemd[1]: sshd@28-172.237.145.45:22-139.178.68.195:34798.service: Deactivated successfully. May 27 03:59:09.229758 systemd[1]: session-29.scope: Deactivated successfully. May 27 03:59:09.231811 systemd-logind[1546]: Session 29 logged out. Waiting for processes to exit. May 27 03:59:09.233440 systemd-logind[1546]: Removed session 29. May 27 03:59:09.393949 containerd[1570]: time="2025-05-27T03:59:09.393873965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"d04afea754a05495339dc76a10d958d3ae696c643504cde53811a84570271f2e\" pid:5803 exited_at:{seconds:1748318349 nanos:393512996}" May 27 03:59:11.749885 kubelet[2692]: E0527 03:59:11.749839 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:59:14.291148 systemd[1]: Started sshd@29-172.237.145.45:22-139.178.68.195:45370.service - OpenSSH per-connection server daemon (139.178.68.195:45370). May 27 03:59:14.632832 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 45370 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:14.634595 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:14.640725 systemd-logind[1546]: New session 30 of user core. May 27 03:59:14.644590 systemd[1]: Started session-30.scope - Session 30 of User core. May 27 03:59:14.751031 kubelet[2692]: E0527 03:59:14.750990 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:59:14.935431 sshd[5817]: Connection closed by 139.178.68.195 port 45370 May 27 03:59:14.936304 sshd-session[5815]: pam_unix(sshd:session): session closed for user core May 27 03:59:14.940475 systemd[1]: sshd@29-172.237.145.45:22-139.178.68.195:45370.service: Deactivated successfully. May 27 03:59:14.943066 systemd[1]: session-30.scope: Deactivated successfully. May 27 03:59:14.944548 systemd-logind[1546]: Session 30 logged out. Waiting for processes to exit. May 27 03:59:14.946518 systemd-logind[1546]: Removed session 30. 
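Annotation: the recurring dns.go warning concerns the node's resolv.conf, not any pod. The resolver honors at most three "nameserver" entries (the glibc limit), so kubelet applies the first three (172.232.0.9, 172.232.0.19, 172.232.0.20) and warns that the rest are omitted. A sketch of the same check, assuming the standard /etc/resolv.conf location:

# Mirror kubelet's "Nameserver limits exceeded" warning: glibc reads at
# most 3 nameserver lines, so any extras in resolv.conf never take effect.
MAX_NAMESERVERS = 3  # glibc MAXNS

def applied_nameservers(path="/etc/resolv.conf"):
    servers = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded; applied nameserver line is:",
              " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]

The warning is harmless as long as the first three servers are reachable; the fix is simply to trim the node's resolv.conf to three entries.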
May 27 03:59:15.752453 kubelet[2692]: E0527 03:59:15.752148 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:59:20.001392 systemd[1]: Started sshd@30-172.237.145.45:22-139.178.68.195:45374.service - OpenSSH per-connection server daemon (139.178.68.195:45374). May 27 03:59:20.338655 sshd[5829]: Accepted publickey for core from 139.178.68.195 port 45374 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:20.340645 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:20.346232 systemd-logind[1546]: New session 31 of user core. May 27 03:59:20.351302 systemd[1]: Started session-31.scope - Session 31 of User core. May 27 03:59:20.644488 sshd[5832]: Connection closed by 139.178.68.195 port 45374 May 27 03:59:20.646374 sshd-session[5829]: pam_unix(sshd:session): session closed for user core May 27 03:59:20.650838 systemd[1]: sshd@30-172.237.145.45:22-139.178.68.195:45374.service: Deactivated successfully. May 27 03:59:20.652734 systemd[1]: session-31.scope: Deactivated successfully. May 27 03:59:20.653942 systemd-logind[1546]: Session 31 logged out. Waiting for processes to exit. May 27 03:59:20.656474 systemd-logind[1546]: Removed session 31. May 27 03:59:25.711257 systemd[1]: Started sshd@31-172.237.145.45:22-139.178.68.195:40374.service - OpenSSH per-connection server daemon (139.178.68.195:40374). May 27 03:59:25.751422 kubelet[2692]: E0527 03:59:25.750420 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:59:26.051497 sshd[5847]: Accepted publickey for core from 139.178.68.195 port 40374 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:26.053002 sshd-session[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:26.058124 systemd-logind[1546]: New session 32 of user core. May 27 03:59:26.065315 systemd[1]: Started session-32.scope - Session 32 of User core. May 27 03:59:26.349594 sshd[5849]: Connection closed by 139.178.68.195 port 40374 May 27 03:59:26.350436 sshd-session[5847]: pam_unix(sshd:session): session closed for user core May 27 03:59:26.355049 systemd-logind[1546]: Session 32 logged out. Waiting for processes to exit. 
May 27 03:59:26.355943 systemd[1]: sshd@31-172.237.145.45:22-139.178.68.195:40374.service: Deactivated successfully. May 27 03:59:26.360688 systemd[1]: session-32.scope: Deactivated successfully. May 27 03:59:26.362455 systemd-logind[1546]: Removed session 32. May 27 03:59:26.751587 kubelet[2692]: E0527 03:59:26.751543 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:59:28.750239 kubelet[2692]: E0527 03:59:28.750175 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:59:28.753075 kubelet[2692]: E0527 03:59:28.752768 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:59:31.420084 systemd[1]: Started sshd@32-172.237.145.45:22-139.178.68.195:40380.service - OpenSSH per-connection server daemon (139.178.68.195:40380). May 27 03:59:31.749460 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 40380 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:31.752368 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:31.758433 systemd-logind[1546]: New session 33 of user core. May 27 03:59:31.764407 systemd[1]: Started session-33.scope - Session 33 of User core. May 27 03:59:32.040323 sshd[5866]: Connection closed by 139.178.68.195 port 40380 May 27 03:59:32.040885 sshd-session[5864]: pam_unix(sshd:session): session closed for user core May 27 03:59:32.043893 systemd[1]: sshd@32-172.237.145.45:22-139.178.68.195:40380.service: Deactivated successfully. May 27 03:59:32.046249 systemd[1]: session-33.scope: Deactivated successfully. 
May 27 03:59:32.048248 systemd-logind[1546]: Session 33 logged out. Waiting for processes to exit. May 27 03:59:32.049940 systemd-logind[1546]: Removed session 33. May 27 03:59:33.970822 containerd[1570]: time="2025-05-27T03:59:33.970781906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"b01a9fa3d5f56bf3d317ae509584ce15dc54a2348ca2dbaad3bf5fa7088a5f90\" pid:5889 exited_at:{seconds:1748318373 nanos:970456989}" May 27 03:59:37.105854 systemd[1]: Started sshd@33-172.237.145.45:22-139.178.68.195:39304.service - OpenSSH per-connection server daemon (139.178.68.195:39304). May 27 03:59:37.439793 sshd[5899]: Accepted publickey for core from 139.178.68.195 port 39304 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:37.441478 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:37.445920 systemd-logind[1546]: New session 34 of user core. May 27 03:59:37.450522 systemd[1]: Started session-34.scope - Session 34 of User core. May 27 03:59:37.732069 sshd[5901]: Connection closed by 139.178.68.195 port 39304 May 27 03:59:37.733104 sshd-session[5899]: pam_unix(sshd:session): session closed for user core May 27 03:59:37.737858 systemd[1]: sshd@33-172.237.145.45:22-139.178.68.195:39304.service: Deactivated successfully. May 27 03:59:37.740609 systemd[1]: session-34.scope: Deactivated successfully. May 27 03:59:37.742464 systemd-logind[1546]: Session 34 logged out. Waiting for processes to exit. May 27 03:59:37.744718 systemd-logind[1546]: Removed session 34. May 27 03:59:38.750198 kubelet[2692]: E0527 03:59:38.750136 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:59:39.392353 containerd[1570]: time="2025-05-27T03:59:39.392316533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"128436864047d8128f13f7ab52022446f5a2c7a024b2b7b07e4c644134b5e6d3\" pid:5924 exited_at:{seconds:1748318379 nanos:392102180}" May 27 03:59:41.750678 kubelet[2692]: E0527 03:59:41.750151 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:59:42.790927 systemd[1]: Started sshd@34-172.237.145.45:22-139.178.68.195:39310.service - OpenSSH per-connection server daemon (139.178.68.195:39310). May 27 03:59:43.123579 sshd[5937]: Accepted publickey for core from 139.178.68.195 port 39310 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:43.125590 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:43.131533 systemd-logind[1546]: New session 35 of user core. 
May 27 03:59:43.136309 systemd[1]: Started session-35.scope - Session 35 of User core. May 27 03:59:43.423440 sshd[5939]: Connection closed by 139.178.68.195 port 39310 May 27 03:59:43.424275 sshd-session[5937]: pam_unix(sshd:session): session closed for user core May 27 03:59:43.429075 systemd[1]: sshd@34-172.237.145.45:22-139.178.68.195:39310.service: Deactivated successfully. May 27 03:59:43.431919 systemd[1]: session-35.scope: Deactivated successfully. May 27 03:59:43.433999 systemd-logind[1546]: Session 35 logged out. Waiting for processes to exit. May 27 03:59:43.435629 systemd-logind[1546]: Removed session 35. May 27 03:59:43.751846 kubelet[2692]: E0527 03:59:43.751789 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 03:59:43.754059 kubelet[2692]: E0527 03:59:43.754008 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:59:48.490242 systemd[1]: Started sshd@35-172.237.145.45:22-139.178.68.195:49012.service - OpenSSH per-connection server daemon (139.178.68.195:49012). May 27 03:59:48.829956 sshd[5951]: Accepted publickey for core from 139.178.68.195 port 49012 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:48.831802 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:48.839693 systemd-logind[1546]: New session 36 of user core. May 27 03:59:48.845348 systemd[1]: Started session-36.scope - Session 36 of User core. May 27 03:59:49.147361 sshd[5953]: Connection closed by 139.178.68.195 port 49012 May 27 03:59:49.148336 sshd-session[5951]: pam_unix(sshd:session): session closed for user core May 27 03:59:49.153355 systemd-logind[1546]: Session 36 logged out. Waiting for processes to exit. May 27 03:59:49.154111 systemd[1]: sshd@35-172.237.145.45:22-139.178.68.195:49012.service: Deactivated successfully. May 27 03:59:49.156742 systemd[1]: session-36.scope: Deactivated successfully. May 27 03:59:49.159529 systemd-logind[1546]: Removed session 36. 
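Annotation: by this point the errors have shifted from ErrImagePull to ImagePullBackOff. Kubelet refuses to re-pull until an exponentially growing delay expires; the periodic "Error syncing pod, skipping" lines are pod syncs hitting that still-active backoff, not fresh pull attempts. A sketch of the underlying schedule; the 10 s base, factor 2, and 300 s cap are kubelet's documented defaults to the best of my knowledge and should be treated as assumptions:

# Sketch of kubelet's image-pull backoff: exponential growth from a
# short base delay up to a hard cap.
BASE_SECONDS = 10   # assumed kubelet default
FACTOR = 2
CAP_SECONDS = 300   # assumed 5-minute ceiling

def backoff_schedule(retries):
    delay = BASE_SECONDS
    for attempt in range(1, retries + 1):
        yield attempt, delay
        delay = min(delay * FACTOR, CAP_SECONDS)

for attempt, delay in backoff_schedule(8):
    print(f"retry {attempt}: wait {delay}s")
# 10, 20, 40, 80, 160, then capped at 300s: roughly one real pull
# attempt every five minutes once the backoff saturates.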
May 27 03:59:50.751151 kubelet[2692]: E0527 03:59:50.751100 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 03:59:53.316382 containerd[1570]: time="2025-05-27T03:59:53.316342811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"6482e679aa9ecef11e0725594e98376f6cc5ab6c87d332290f7d50cabd514404\" pid:5977 exited_at:{seconds:1748318393 nanos:315918144}" May 27 03:59:54.208111 systemd[1]: Started sshd@36-172.237.145.45:22-139.178.68.195:40930.service - OpenSSH per-connection server daemon (139.178.68.195:40930). May 27 03:59:54.543425 sshd[5988]: Accepted publickey for core from 139.178.68.195 port 40930 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:54.544966 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:54.550629 systemd-logind[1546]: New session 37 of user core. May 27 03:59:54.555396 systemd[1]: Started session-37.scope - Session 37 of User core. May 27 03:59:54.847360 sshd[5990]: Connection closed by 139.178.68.195 port 40930 May 27 03:59:54.848215 sshd-session[5988]: pam_unix(sshd:session): session closed for user core May 27 03:59:54.852277 systemd-logind[1546]: Session 37 logged out. Waiting for processes to exit. May 27 03:59:54.852926 systemd[1]: sshd@36-172.237.145.45:22-139.178.68.195:40930.service: Deactivated successfully. May 27 03:59:54.855179 systemd[1]: session-37.scope: Deactivated successfully. May 27 03:59:54.857067 systemd-logind[1546]: Removed session 37. 
May 27 03:59:58.751300 kubelet[2692]: E0527 03:59:58.751234 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 03:59:59.911038 systemd[1]: Started sshd@37-172.237.145.45:22-139.178.68.195:40942.service - OpenSSH per-connection server daemon (139.178.68.195:40942). May 27 04:00:00.250059 sshd[6004]: Accepted publickey for core from 139.178.68.195 port 40942 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:00.251467 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:00.256091 systemd-logind[1546]: New session 38 of user core. May 27 04:00:00.261309 systemd[1]: Started session-38.scope - Session 38 of User core. May 27 04:00:00.554475 sshd[6006]: Connection closed by 139.178.68.195 port 40942 May 27 04:00:00.554944 sshd-session[6004]: pam_unix(sshd:session): session closed for user core May 27 04:00:00.559954 systemd[1]: sshd@37-172.237.145.45:22-139.178.68.195:40942.service: Deactivated successfully. May 27 04:00:00.562696 systemd[1]: session-38.scope: Deactivated successfully. May 27 04:00:00.563770 systemd-logind[1546]: Session 38 logged out. Waiting for processes to exit. May 27 04:00:00.565698 systemd-logind[1546]: Removed session 38. May 27 04:00:02.752037 kubelet[2692]: E0527 04:00:02.751960 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 04:00:03.968831 containerd[1570]: time="2025-05-27T04:00:03.968791098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"4171a5e4adcf5b643a1d49fa1428d534bf867991a3728630375d3afcf0c223cf\" pid:6029 exited_at:{seconds:1748318403 nanos:968611476}" May 27 04:00:05.615203 systemd[1]: Started sshd@38-172.237.145.45:22-139.178.68.195:44104.service - OpenSSH per-connection server daemon (139.178.68.195:44104). 
May 27 04:00:05.751581 kubelet[2692]: E0527 04:00:05.751519 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 04:00:05.950569 sshd[6040]: Accepted publickey for core from 139.178.68.195 port 44104 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:05.952114 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:05.958003 systemd-logind[1546]: New session 39 of user core. May 27 04:00:05.963325 systemd[1]: Started session-39.scope - Session 39 of User core. May 27 04:00:06.243851 sshd[6042]: Connection closed by 139.178.68.195 port 44104 May 27 04:00:06.245070 sshd-session[6040]: pam_unix(sshd:session): session closed for user core May 27 04:00:06.249130 systemd-logind[1546]: Session 39 logged out. Waiting for processes to exit. May 27 04:00:06.249682 systemd[1]: sshd@38-172.237.145.45:22-139.178.68.195:44104.service: Deactivated successfully. May 27 04:00:06.251602 systemd[1]: session-39.scope: Deactivated successfully. May 27 04:00:06.253643 systemd-logind[1546]: Removed session 39. May 27 04:00:09.389888 containerd[1570]: time="2025-05-27T04:00:09.389853466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"ec18ae925bd409edf2958e4b083f048acbed8fe8c910e4dd13cbdcd831678611\" pid:6066 exited_at:{seconds:1748318409 nanos:389594172}" May 27 04:00:11.308067 systemd[1]: Started sshd@39-172.237.145.45:22-139.178.68.195:44114.service - OpenSSH per-connection server daemon (139.178.68.195:44114). May 27 04:00:11.652044 sshd[6079]: Accepted publickey for core from 139.178.68.195 port 44114 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:11.654181 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:11.660683 systemd-logind[1546]: New session 40 of user core. May 27 04:00:11.665344 systemd[1]: Started session-40.scope - Session 40 of User core. 
May 27 04:00:11.752880 kubelet[2692]: E0527 04:00:11.752834 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 04:00:11.960118 sshd[6081]: Connection closed by 139.178.68.195 port 44114 May 27 04:00:11.960996 sshd-session[6079]: pam_unix(sshd:session): session closed for user core May 27 04:00:11.964354 systemd[1]: sshd@39-172.237.145.45:22-139.178.68.195:44114.service: Deactivated successfully. May 27 04:00:11.966894 systemd[1]: session-40.scope: Deactivated successfully. May 27 04:00:11.969172 systemd-logind[1546]: Session 40 logged out. Waiting for processes to exit. May 27 04:00:11.970585 systemd-logind[1546]: Removed session 40. May 27 04:00:14.749793 kubelet[2692]: E0527 04:00:14.749763 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 04:00:17.015784 systemd[1]: Started sshd@40-172.237.145.45:22-139.178.68.195:42106.service - OpenSSH per-connection server daemon (139.178.68.195:42106). 
May 27 04:00:17.207921 containerd[1570]: time="2025-05-27T04:00:17.207847549Z" level=warning msg="container event discarded" container=908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b type=CONTAINER_CREATED_EVENT May 27 04:00:17.219285 containerd[1570]: time="2025-05-27T04:00:17.219257724Z" level=warning msg="container event discarded" container=908ad7f4cef470eed4a14e0e4c106b4704de160683bcc45beedc20eb3b60342b type=CONTAINER_STARTED_EVENT May 27 04:00:17.219386 containerd[1570]: time="2025-05-27T04:00:17.219291854Z" level=warning msg="container event discarded" container=80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286 type=CONTAINER_CREATED_EVENT May 27 04:00:17.219386 containerd[1570]: time="2025-05-27T04:00:17.219301845Z" level=warning msg="container event discarded" container=80f35fe96d59b2604dc71d60de87b73cb2d994c29cf234f2e5359cb5f5bce286 type=CONTAINER_STARTED_EVENT May 27 04:00:17.241499 containerd[1570]: time="2025-05-27T04:00:17.241458954Z" level=warning msg="container event discarded" container=186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7 type=CONTAINER_CREATED_EVENT May 27 04:00:17.241499 containerd[1570]: time="2025-05-27T04:00:17.241484215Z" level=warning msg="container event discarded" container=186e06de3be321718ab1948f083cc20d89999743fa3562c83afcc473f03ebea7 type=CONTAINER_STARTED_EVENT May 27 04:00:17.252736 containerd[1570]: time="2025-05-27T04:00:17.252707646Z" level=warning msg="container event discarded" container=d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da type=CONTAINER_CREATED_EVENT May 27 04:00:17.252736 containerd[1570]: time="2025-05-27T04:00:17.252728436Z" level=warning msg="container event discarded" container=45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17 type=CONTAINER_CREATED_EVENT May 27 04:00:17.266075 containerd[1570]: time="2025-05-27T04:00:17.266011786Z" level=warning msg="container event discarded" container=9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4 type=CONTAINER_CREATED_EVENT May 27 04:00:17.343699 sshd[6093]: Accepted publickey for core from 139.178.68.195 port 42106 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:17.345120 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:17.350344 systemd-logind[1546]: New session 41 of user core. May 27 04:00:17.356307 systemd[1]: Started session-41.scope - Session 41 of User core. May 27 04:00:17.383443 containerd[1570]: time="2025-05-27T04:00:17.383398113Z" level=warning msg="container event discarded" container=d0d1911bdd1a62b2453d338c3b3383ec6c337c6e3b3c1489cc833793dcad07da type=CONTAINER_STARTED_EVENT May 27 04:00:17.383443 containerd[1570]: time="2025-05-27T04:00:17.383437624Z" level=warning msg="container event discarded" container=45284ef4c091db5e186be327cd5454b4ce1ea10affd12e6d03396bb56cfc4b17 type=CONTAINER_STARTED_EVENT May 27 04:00:17.418652 containerd[1570]: time="2025-05-27T04:00:17.418611409Z" level=warning msg="container event discarded" container=9da2a576c6c9f8f6dea541a37aa2213b61de786601534b446f436579021984f4 type=CONTAINER_STARTED_EVENT May 27 04:00:17.640439 sshd[6095]: Connection closed by 139.178.68.195 port 42106 May 27 04:00:17.641141 sshd-session[6093]: pam_unix(sshd:session): session closed for user core May 27 04:00:17.646242 systemd[1]: sshd@40-172.237.145.45:22-139.178.68.195:42106.service: Deactivated successfully. 
May 27 04:00:17.648793 systemd[1]: session-41.scope: Deactivated successfully. May 27 04:00:17.649705 systemd-logind[1546]: Session 41 logged out. Waiting for processes to exit. May 27 04:00:17.652333 systemd-logind[1546]: Removed session 41. May 27 04:00:19.752550 kubelet[2692]: E0527 04:00:19.752498 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 04:00:22.708706 systemd[1]: Started sshd@41-172.237.145.45:22-139.178.68.195:42116.service - OpenSSH per-connection server daemon (139.178.68.195:42116). May 27 04:00:23.056559 sshd[6109]: Accepted publickey for core from 139.178.68.195 port 42116 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:23.058037 sshd-session[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:23.062937 systemd-logind[1546]: New session 42 of user core. May 27 04:00:23.067325 systemd[1]: Started session-42.scope - Session 42 of User core. May 27 04:00:23.364457 sshd[6111]: Connection closed by 139.178.68.195 port 42116 May 27 04:00:23.365320 sshd-session[6109]: pam_unix(sshd:session): session closed for user core May 27 04:00:23.370712 systemd-logind[1546]: Session 42 logged out. Waiting for processes to exit. May 27 04:00:23.371768 systemd[1]: sshd@41-172.237.145.45:22-139.178.68.195:42116.service: Deactivated successfully. May 27 04:00:23.374064 systemd[1]: session-42.scope: Deactivated successfully. May 27 04:00:23.376630 systemd-logind[1546]: Removed session 42. 
May 27 04:00:23.754016 kubelet[2692]: E0527 04:00:23.753948 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 04:00:28.349728 containerd[1570]: time="2025-05-27T04:00:28.349671291Z" level=warning msg="container event discarded" container=0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a type=CONTAINER_CREATED_EVENT May 27 04:00:28.349728 containerd[1570]: time="2025-05-27T04:00:28.349713221Z" level=warning msg="container event discarded" container=0829b5171056b03188a8e30acb52b37a68c939c18f4efc610b2ddaac8665163a type=CONTAINER_STARTED_EVENT May 27 04:00:28.377472 containerd[1570]: time="2025-05-27T04:00:28.377386558Z" level=warning msg="container event discarded" container=8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220 type=CONTAINER_CREATED_EVENT May 27 04:00:28.433617 systemd[1]: Started sshd@42-172.237.145.45:22-139.178.68.195:47704.service - OpenSSH per-connection server daemon (139.178.68.195:47704). May 27 04:00:28.466795 containerd[1570]: time="2025-05-27T04:00:28.466738999Z" level=warning msg="container event discarded" container=8f931c8afd5c7bb5c7bc15a16108bfd012b28dc20aff025ec4c194ba3147c220 type=CONTAINER_STARTED_EVENT May 27 04:00:28.493976 containerd[1570]: time="2025-05-27T04:00:28.493931629Z" level=warning msg="container event discarded" container=89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d type=CONTAINER_CREATED_EVENT May 27 04:00:28.493976 containerd[1570]: time="2025-05-27T04:00:28.493967410Z" level=warning msg="container event discarded" container=89f3b6077d06d1fca9c9b2eb6951e376630d0212c3efa0ac6424764543fee57d type=CONTAINER_STARTED_EVENT May 27 04:00:28.777140 sshd[6124]: Accepted publickey for core from 139.178.68.195 port 47704 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:28.778936 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:28.785279 systemd-logind[1546]: New session 43 of user core. May 27 04:00:28.789334 systemd[1]: Started session-43.scope - Session 43 of User core. May 27 04:00:29.080596 sshd[6128]: Connection closed by 139.178.68.195 port 47704 May 27 04:00:29.080852 sshd-session[6124]: pam_unix(sshd:session): session closed for user core May 27 04:00:29.086085 systemd-logind[1546]: Session 43 logged out. Waiting for processes to exit. 
May 27 04:00:29.087004 systemd[1]: sshd@42-172.237.145.45:22-139.178.68.195:47704.service: Deactivated successfully. May 27 04:00:29.089736 systemd[1]: session-43.scope: Deactivated successfully. May 27 04:00:29.091861 systemd-logind[1546]: Removed session 43. May 27 04:00:30.240790 containerd[1570]: time="2025-05-27T04:00:30.240722665Z" level=warning msg="container event discarded" container=88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32 type=CONTAINER_CREATED_EVENT May 27 04:00:30.297117 containerd[1570]: time="2025-05-27T04:00:30.297069251Z" level=warning msg="container event discarded" container=88ceba7ca898b461c13890ee172d0d61bdd5ef2e2f8626fea0d102685b9d4a32 type=CONTAINER_STARTED_EVENT May 27 04:00:31.750383 kubelet[2692]: E0527 04:00:31.750338 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27" May 27 04:00:33.965422 containerd[1570]: time="2025-05-27T04:00:33.965387591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"631bee15d687748ff85245c677501c61a7e66a0a88f07994bb4a9ea58ebead84\" pid:6172 exited_at:{seconds:1748318433 nanos:965204498}" May 27 04:00:34.141960 systemd[1]: Started sshd@43-172.237.145.45:22-139.178.68.195:37980.service - OpenSSH per-connection server daemon (139.178.68.195:37980). May 27 04:00:34.483432 sshd[6182]: Accepted publickey for core from 139.178.68.195 port 37980 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:34.485953 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:34.496412 systemd-logind[1546]: New session 44 of user core. May 27 04:00:34.503316 systemd[1]: Started session-44.scope - Session 44 of User core. May 27 04:00:34.795308 sshd[6184]: Connection closed by 139.178.68.195 port 37980 May 27 04:00:34.798348 sshd-session[6182]: pam_unix(sshd:session): session closed for user core May 27 04:00:34.802237 systemd-logind[1546]: Session 44 logged out. Waiting for processes to exit. May 27 04:00:34.804182 systemd[1]: sshd@43-172.237.145.45:22-139.178.68.195:37980.service: Deactivated successfully. May 27 04:00:34.807622 systemd[1]: session-44.scope: Deactivated successfully. May 27 04:00:34.811044 systemd-logind[1546]: Removed session 44. 
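Annotation: the "container event discarded" warnings that follow (and recur through the rest of this span) appear to be containerd shedding load: when the backlog of events for a subscriber exceeds its bound, older events are dropped with a warning instead of blocking the runtime. A toy model of that drop-on-overflow policy; this illustrates the behavior only and is not containerd's actual implementation:

# Toy drop-on-overflow event queue: discard and warn rather than block
# the producer, which is the pattern the warnings above suggest.
from collections import deque

class BoundedEventQueue:
    def __init__(self, maxlen=128):
        self.maxlen = maxlen
        self.queue = deque()

    def publish(self, container_id, event_type):
        if len(self.queue) >= self.maxlen:
            print(f"warning: container event discarded "
                  f"container={container_id} type={event_type}")
            return False
        self.queue.append((container_id, event_type))
        return True

q = BoundedEventQueue(maxlen=2)
for kind in ("CREATED", "STARTED", "STOPPED"):
    q.publish("908ad7f4cef4...", f"CONTAINER_{kind}_EVENT")  # third is dropped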
May 27 04:00:38.749639 kubelet[2692]: E0527 04:00:38.749462 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" May 27 04:00:38.751241 kubelet[2692]: E0527 04:00:38.751206 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb" May 27 04:00:39.393309 containerd[1570]: time="2025-05-27T04:00:39.393237751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"68ab956bedcf340de775bc9ea33b2657ab4be663896d8161bea6de800383ac93\" pid:6207 exited_at:{seconds:1748318439 nanos:393030319}" May 27 04:00:39.450270 containerd[1570]: time="2025-05-27T04:00:39.450227709Z" level=warning msg="container event discarded" container=9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be type=CONTAINER_CREATED_EVENT May 27 04:00:39.450270 containerd[1570]: time="2025-05-27T04:00:39.450262930Z" level=warning msg="container event discarded" container=9db7031b476baad853ca5a3d6cfc91553ef63f70521737160612ff26e21fe8be type=CONTAINER_STARTED_EVENT May 27 04:00:39.661456 containerd[1570]: time="2025-05-27T04:00:39.661339040Z" level=warning msg="container event discarded" container=801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879 type=CONTAINER_CREATED_EVENT May 27 04:00:39.661456 containerd[1570]: time="2025-05-27T04:00:39.661365810Z" level=warning msg="container event discarded" container=801453188ceb7cda0b11bd7a5fb36095b68979c28145bb12a8a10739eb842879 type=CONTAINER_STARTED_EVENT May 27 04:00:39.858399 systemd[1]: Started sshd@44-172.237.145.45:22-139.178.68.195:37986.service - OpenSSH per-connection server daemon (139.178.68.195:37986). May 27 04:00:40.199297 sshd[6220]: Accepted publickey for core from 139.178.68.195 port 37986 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:40.200630 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:40.206277 systemd-logind[1546]: New session 45 of user core. May 27 04:00:40.211307 systemd[1]: Started session-45.scope - Session 45 of User core. 
May 27 04:00:40.506557 sshd[6222]: Connection closed by 139.178.68.195 port 37986 May 27 04:00:40.507310 sshd-session[6220]: pam_unix(sshd:session): session closed for user core May 27 04:00:40.510799 systemd[1]: sshd@44-172.237.145.45:22-139.178.68.195:37986.service: Deactivated successfully. May 27 04:00:40.512872 systemd[1]: session-45.scope: Deactivated successfully. May 27 04:00:40.516132 systemd-logind[1546]: Session 45 logged out. Waiting for processes to exit. May 27 04:00:40.517821 systemd-logind[1546]: Removed session 45. May 27 04:00:40.648478 containerd[1570]: time="2025-05-27T04:00:40.648413150Z" level=warning msg="container event discarded" container=81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68 type=CONTAINER_CREATED_EVENT May 27 04:00:40.727829 containerd[1570]: time="2025-05-27T04:00:40.727782234Z" level=warning msg="container event discarded" container=81d4a66b96042094bf6d36abda88bbb6bde425b9a05c5c256c6d92ccc19fbf68 type=CONTAINER_STARTED_EVENT May 27 04:00:41.349066 containerd[1570]: time="2025-05-27T04:00:41.348993391Z" level=warning msg="container event discarded" container=681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b type=CONTAINER_CREATED_EVENT May 27 04:00:41.420431 containerd[1570]: time="2025-05-27T04:00:41.420375867Z" level=warning msg="container event discarded" container=681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b type=CONTAINER_STARTED_EVENT May 27 04:00:41.529208 containerd[1570]: time="2025-05-27T04:00:41.529157126Z" level=warning msg="container event discarded" container=681d98dbdce6fa4261c682f67f6df4bdff1b37d5aab4d83a58fa171144ad924b type=CONTAINER_STOPPED_EVENT May 27 04:00:41.830681 update_engine[1549]: I20250527 04:00:41.830606 1549 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 27 04:00:41.830681 update_engine[1549]: I20250527 04:00:41.830669 1549 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 27 04:00:41.831145 update_engine[1549]: I20250527 04:00:41.831014 1549 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 27 04:00:41.832366 update_engine[1549]: I20250527 04:00:41.832315 1549 omaha_request_params.cc:62] Current group set to alpha May 27 04:00:41.832913 update_engine[1549]: I20250527 04:00:41.832880 1549 update_attempter.cc:499] Already updated boot flags. Skipping. May 27 04:00:41.833345 update_engine[1549]: I20250527 04:00:41.833037 1549 update_attempter.cc:643] Scheduling an action processor start. 
May 27 04:00:41.833345 update_engine[1549]: I20250527 04:00:41.833066 1549 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 04:00:41.833345 update_engine[1549]: I20250527 04:00:41.833093 1549 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 27 04:00:41.833345 update_engine[1549]: I20250527 04:00:41.833153 1549 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 04:00:41.833345 update_engine[1549]: I20250527 04:00:41.833160 1549 omaha_request_action.cc:272] Request:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]:
May 27 04:00:41.833345 update_engine[1549]: I20250527 04:00:41.833168 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 04:00:41.836798 locksmithd[1588]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 27 04:00:41.839114 update_engine[1549]: I20250527 04:00:41.839076 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 04:00:41.839517 update_engine[1549]: I20250527 04:00:41.839481 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 04:00:41.948425 update_engine[1549]: E20250527 04:00:41.948369 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 04:00:41.948555 update_engine[1549]: I20250527 04:00:41.948443 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 27 04:00:43.456656 containerd[1570]: time="2025-05-27T04:00:43.456603742Z" level=warning msg="container event discarded" container=ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706 type=CONTAINER_CREATED_EVENT
May 27 04:00:43.518397 containerd[1570]: time="2025-05-27T04:00:43.518355868Z" level=warning msg="container event discarded" container=ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706 type=CONTAINER_STARTED_EVENT
May 27 04:00:44.079375 containerd[1570]: time="2025-05-27T04:00:44.079303200Z" level=warning msg="container event discarded" container=ad3171b435f6e4a0816c5ecdb43e8dbcf5b548c19ca5425d378e2c0f63ec6706 type=CONTAINER_STOPPED_EVENT
May 27 04:00:44.749580 kubelet[2692]: E0527 04:00:44.749539 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 04:00:45.567104 systemd[1]: Started sshd@45-172.237.145.45:22-139.178.68.195:39744.service - OpenSSH per-connection server daemon (139.178.68.195:39744).
May 27 04:00:45.750570 kubelet[2692]: E0527 04:00:45.750491 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 04:00:45.899796 sshd[6234]: Accepted publickey for core from 139.178.68.195 port 39744 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:00:45.901296 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:00:45.906261 systemd-logind[1546]: New session 46 of user core.
May 27 04:00:45.912313 systemd[1]: Started session-46.scope - Session 46 of User core.
May 27 04:00:46.193103 sshd[6236]: Connection closed by 139.178.68.195 port 39744
May 27 04:00:46.193965 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
May 27 04:00:46.198335 systemd[1]: sshd@45-172.237.145.45:22-139.178.68.195:39744.service: Deactivated successfully.
May 27 04:00:46.200792 systemd[1]: session-46.scope: Deactivated successfully.
May 27 04:00:46.202370 systemd-logind[1546]: Session 46 logged out. Waiting for processes to exit.
May 27 04:00:46.204622 systemd-logind[1546]: Removed session 46.
May 27 04:00:48.271785 containerd[1570]: time="2025-05-27T04:00:48.271704094Z" level=warning msg="container event discarded" container=9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613 type=CONTAINER_CREATED_EVENT
May 27 04:00:48.388065 containerd[1570]: time="2025-05-27T04:00:48.387999218Z" level=warning msg="container event discarded" container=9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613 type=CONTAINER_STARTED_EVENT
May 27 04:00:49.503602 containerd[1570]: time="2025-05-27T04:00:49.503514662Z" level=warning msg="container event discarded" container=03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c type=CONTAINER_CREATED_EVENT
May 27 04:00:49.503602 containerd[1570]: time="2025-05-27T04:00:49.503564393Z" level=warning msg="container event discarded" container=03cb00ddada016c553952d01809ad5253784d16cb8b0ada33210ee4fd7ea883c type=CONTAINER_STARTED_EVENT
May 27 04:00:49.754885 kubelet[2692]: E0527 04:00:49.754747 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 04:00:51.262255 systemd[1]: Started sshd@46-172.237.145.45:22-139.178.68.195:39758.service - OpenSSH per-connection server daemon (139.178.68.195:39758).
May 27 04:00:51.602941 sshd[6248]: Accepted publickey for core from 139.178.68.195 port 39758 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:00:51.604297 sshd-session[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:00:51.609246 systemd-logind[1546]: New session 47 of user core.
May 27 04:00:51.616304 systemd[1]: Started session-47.scope - Session 47 of User core.
May 27 04:00:51.752444 kubelet[2692]: E0527 04:00:51.752405 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 04:00:51.753422 kubelet[2692]: E0527 04:00:51.753365 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 04:00:51.829385 update_engine[1549]: I20250527 04:00:51.829317 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 04:00:51.829757 update_engine[1549]: I20250527 04:00:51.829623 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 04:00:51.830385 update_engine[1549]: I20250527 04:00:51.830354 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 04:00:51.830979 update_engine[1549]: E20250527 04:00:51.830950 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 04:00:51.831012 update_engine[1549]: I20250527 04:00:51.831001 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 27 04:00:51.918748 sshd[6250]: Connection closed by 139.178.68.195 port 39758
May 27 04:00:51.919656 sshd-session[6248]: pam_unix(sshd:session): session closed for user core
May 27 04:00:51.923566 systemd[1]: sshd@46-172.237.145.45:22-139.178.68.195:39758.service: Deactivated successfully.
May 27 04:00:51.925960 systemd[1]: session-47.scope: Deactivated successfully.
May 27 04:00:51.926924 systemd-logind[1546]: Session 47 logged out. Waiting for processes to exit.
May 27 04:00:51.929519 systemd-logind[1546]: Removed session 47.
May 27 04:00:52.749966 kubelet[2692]: E0527 04:00:52.749933 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
May 27 04:00:53.311480 containerd[1570]: time="2025-05-27T04:00:53.311427841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"5ac531f3c0a9a5897de0e2040d1c1e4c116ca0def95c14baa953ed12ac934248\" pid:6272 exited_at:{seconds:1748318453 nanos:311238059}"
May 27 04:00:55.969775 containerd[1570]: time="2025-05-27T04:00:55.969689556Z" level=warning msg="container event discarded" container=83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1 type=CONTAINER_CREATED_EVENT
May 27 04:00:55.969775 containerd[1570]: time="2025-05-27T04:00:55.969749967Z" level=warning msg="container event discarded" container=83cb036ecc3be99f8689f5f9ee05e796a3e52e844bc3c37917cd1887f33541d1 type=CONTAINER_STARTED_EVENT
May 27 04:00:56.977046 systemd[1]: Started sshd@47-172.237.145.45:22-139.178.68.195:54018.service - OpenSSH per-connection server daemon (139.178.68.195:54018).
May 27 04:00:57.311560 sshd[6283]: Accepted publickey for core from 139.178.68.195 port 54018 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:00:57.312944 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:00:57.317865 systemd-logind[1546]: New session 48 of user core.
May 27 04:00:57.323318 systemd[1]: Started session-48.scope - Session 48 of User core.
May 27 04:00:57.623521 sshd[6285]: Connection closed by 139.178.68.195 port 54018
May 27 04:00:57.624135 sshd-session[6283]: pam_unix(sshd:session): session closed for user core
May 27 04:00:57.628942 systemd[1]: sshd@47-172.237.145.45:22-139.178.68.195:54018.service: Deactivated successfully.
May 27 04:00:57.632090 systemd[1]: session-48.scope: Deactivated successfully.
May 27 04:00:57.633588 systemd-logind[1546]: Session 48 logged out. Waiting for processes to exit.
May 27 04:00:57.635447 systemd-logind[1546]: Removed session 48.
May 27 04:00:57.751068 kubelet[2692]: E0527 04:00:57.751019 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 04:00:58.051318 containerd[1570]: time="2025-05-27T04:00:58.051252537Z" level=warning msg="container event discarded" container=32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0 type=CONTAINER_CREATED_EVENT
May 27 04:00:58.051318 containerd[1570]: time="2025-05-27T04:00:58.051301137Z" level=warning msg="container event discarded" container=32397e2894ba6381cfaae4387210941324e44294fb71d0a4511cb6b95b4073c0 type=CONTAINER_STARTED_EVENT
May 27 04:00:58.077079 containerd[1570]: time="2025-05-27T04:00:58.077037669Z" level=warning msg="container event discarded" container=3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800 type=CONTAINER_CREATED_EVENT
May 27 04:00:58.140553 containerd[1570]: time="2025-05-27T04:00:58.140522789Z" level=warning msg="container event discarded" container=3e59d3ba03ed3911e18f07820950b9e43745de25141838ec823a28fbb13ff800 type=CONTAINER_STARTED_EVENT
May 27 04:00:59.124072 containerd[1570]: time="2025-05-27T04:00:59.124000444Z" level=warning msg="container event discarded" container=71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e type=CONTAINER_CREATED_EVENT
May 27 04:00:59.124072 containerd[1570]: time="2025-05-27T04:00:59.124045444Z" level=warning msg="container event discarded" container=71bd5531f51409429de110927b81d40d1301c9bdbcecc1c39df63bf0cee7d64e type=CONTAINER_STARTED_EVENT
May 27 04:00:59.745967 containerd[1570]: time="2025-05-27T04:00:59.745870488Z" level=warning msg="container event discarded" container=ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420 type=CONTAINER_CREATED_EVENT
May 27 04:00:59.909310 containerd[1570]: time="2025-05-27T04:00:59.909226986Z" level=warning msg="container event discarded" container=ff71f716c2da52cd4dd1d80cfe6aafeb359479b6e4d86eb7acd87fabc2ff8420 type=CONTAINER_STARTED_EVENT
May 27 04:01:00.391466 containerd[1570]: time="2025-05-27T04:01:00.391390991Z" level=warning msg="container event discarded" container=c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838 type=CONTAINER_CREATED_EVENT
May 27 04:01:00.391466 containerd[1570]: time="2025-05-27T04:01:00.391434642Z" level=warning msg="container event discarded" container=c80368a80f539b3fa2d32ca2bb93f05d28768aa1c78996119d34e06aa505f838 type=CONTAINER_STARTED_EVENT
May 27 04:01:00.407039 containerd[1570]: time="2025-05-27T04:01:00.407004595Z" level=warning msg="container event discarded" container=0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9 type=CONTAINER_CREATED_EVENT
May 27 04:01:00.407039 containerd[1570]: time="2025-05-27T04:01:00.407031885Z" level=warning msg="container event discarded" container=0864c4bacfeb51698d0dc90423081ebcc982736ec3da48f008da4e68b44f63d9 type=CONTAINER_STARTED_EVENT
May 27 04:01:00.425465 containerd[1570]: time="2025-05-27T04:01:00.425392498Z" level=warning msg="container event discarded" container=e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60 type=CONTAINER_CREATED_EVENT
May 27 04:01:00.466665 containerd[1570]: time="2025-05-27T04:01:00.466597729Z" level=warning msg="container event discarded" container=5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577 type=CONTAINER_CREATED_EVENT
May 27 04:01:00.590235 containerd[1570]: time="2025-05-27T04:01:00.590159373Z" level=warning msg="container event discarded" container=5023444947959b5de5dd0ca0ae99264be085bf32308f720868577a8196e75577 type=CONTAINER_STARTED_EVENT
May 27 04:01:00.731608 containerd[1570]: time="2025-05-27T04:01:00.731487644Z" level=warning msg="container event discarded" container=4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7 type=CONTAINER_CREATED_EVENT
May 27 04:01:00.731608 containerd[1570]: time="2025-05-27T04:01:00.731527184Z" level=warning msg="container event discarded" container=4d0e5db8fb92fb8dd388897bf0d71edce41b962351c39806e73dcfee715e3bf7 type=CONTAINER_STARTED_EVENT
May 27 04:01:00.743751 containerd[1570]: time="2025-05-27T04:01:00.743721322Z" level=warning msg="container event discarded" container=cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059 type=CONTAINER_CREATED_EVENT
May 27 04:01:00.743751 containerd[1570]: time="2025-05-27T04:01:00.743746702Z" level=warning msg="container event discarded" container=cf43563f2ab3bb932c7a0801f40bbd94437e76855994ef2e9862dfb5231a1059 type=CONTAINER_STARTED_EVENT
May 27 04:01:00.743843 containerd[1570]: time="2025-05-27T04:01:00.743759412Z" level=warning msg="container event discarded" container=e55c5eacf43e1355c6d255c48a9463be33550a0b3c0342e1eb6dc041c0746d60 type=CONTAINER_STARTED_EVENT
May 27 04:01:01.597063 containerd[1570]: time="2025-05-27T04:01:01.596972019Z" level=warning msg="container event discarded" container=94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867 type=CONTAINER_CREATED_EVENT
May 27 04:01:01.777758 containerd[1570]: time="2025-05-27T04:01:01.777707662Z" level=warning msg="container event discarded" container=94441885d9e322548cfd6b40da64fac2e629a5c79cc7d29ab6cc23cc23767867 type=CONTAINER_STARTED_EVENT
May 27 04:01:01.833495 update_engine[1549]: I20250527 04:01:01.833436 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 04:01:01.833882 update_engine[1549]: I20250527 04:01:01.833710 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 04:01:01.833997 update_engine[1549]: I20250527 04:01:01.833961 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 04:01:01.834645 update_engine[1549]: E20250527 04:01:01.834591 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 04:01:01.834750 update_engine[1549]: I20250527 04:01:01.834667 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 27 04:01:02.688613 systemd[1]: Started sshd@48-172.237.145.45:22-139.178.68.195:54026.service - OpenSSH per-connection server daemon (139.178.68.195:54026).
May 27 04:01:02.751572 kubelet[2692]: E0527 04:01:02.751510 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 04:01:03.035041 sshd[6298]: Accepted publickey for core from 139.178.68.195 port 54026 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:03.037019 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:03.042259 systemd-logind[1546]: New session 49 of user core.
May 27 04:01:03.050309 systemd[1]: Started session-49.scope - Session 49 of User core.
May 27 04:01:03.260323 containerd[1570]: time="2025-05-27T04:01:03.260208344Z" level=warning msg="container event discarded" container=3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96 type=CONTAINER_CREATED_EVENT
May 27 04:01:03.348247 sshd[6300]: Connection closed by 139.178.68.195 port 54026
May 27 04:01:03.349096 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
May 27 04:01:03.353216 systemd-logind[1546]: Session 49 logged out. Waiting for processes to exit.
May 27 04:01:03.353983 systemd[1]: sshd@48-172.237.145.45:22-139.178.68.195:54026.service: Deactivated successfully.
May 27 04:01:03.354650 containerd[1570]: time="2025-05-27T04:01:03.354544928Z" level=warning msg="container event discarded" container=3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96 type=CONTAINER_STARTED_EVENT
May 27 04:01:03.357615 systemd[1]: session-49.scope: Deactivated successfully.
May 27 04:01:03.360096 systemd-logind[1546]: Removed session 49.
May 27 04:01:03.970555 containerd[1570]: time="2025-05-27T04:01:03.970509660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3655c033461662029d67b2b9a60a72ea8718e993779084ac26b54a5243af8b96\" id:\"ae3fafc83485de211165f12fe4370d9d7009e433e33fa2c50dbadf90d3c14dfe\" pid:6322 exited_at:{seconds:1748318463 nanos:970219066}"
May 27 04:01:04.585108 containerd[1570]: time="2025-05-27T04:01:04.585017838Z" level=warning msg="container event discarded" container=a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f type=CONTAINER_CREATED_EVENT
May 27 04:01:04.660942 containerd[1570]: time="2025-05-27T04:01:04.660867229Z" level=warning msg="container event discarded" container=a63ac54f05afbf06a7d5905fa7ae7dcd859595392b89651f4c4c1499676b832f type=CONTAINER_STARTED_EVENT
May 27 04:01:08.411002 systemd[1]: Started sshd@49-172.237.145.45:22-139.178.68.195:55148.service - OpenSSH per-connection server daemon (139.178.68.195:55148).
May 27 04:01:08.752139 sshd[6332]: Accepted publickey for core from 139.178.68.195 port 55148 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:08.753667 sshd-session[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:08.758575 systemd-logind[1546]: New session 50 of user core.
May 27 04:01:08.766329 systemd[1]: Started session-50.scope - Session 50 of User core.
May 27 04:01:09.065552 sshd[6334]: Connection closed by 139.178.68.195 port 55148
May 27 04:01:09.066414 sshd-session[6332]: pam_unix(sshd:session): session closed for user core
May 27 04:01:09.070725 systemd[1]: sshd@49-172.237.145.45:22-139.178.68.195:55148.service: Deactivated successfully.
May 27 04:01:09.073468 systemd[1]: session-50.scope: Deactivated successfully.
May 27 04:01:09.076627 systemd-logind[1546]: Session 50 logged out. Waiting for processes to exit.
May 27 04:01:09.078415 systemd-logind[1546]: Removed session 50.
May 27 04:01:09.396995 containerd[1570]: time="2025-05-27T04:01:09.396876177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e73ba597a7d1bf517c7e6779b497bc7933bb786763613c9a7908b27f2aa4613\" id:\"3f3c331d823845a7181cfb202fa0ce4afc1c8a050e6e35211943de9bfe2332c7\" pid:6356 exited_at:{seconds:1748318469 nanos:396608825}"
May 27 04:01:11.831422 update_engine[1549]: I20250527 04:01:11.831357 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 04:01:11.831791 update_engine[1549]: I20250527 04:01:11.831619 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 04:01:11.831873 update_engine[1549]: I20250527 04:01:11.831846 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 04:01:11.832920 update_engine[1549]: E20250527 04:01:11.832854 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 04:01:11.832973 update_engine[1549]: I20250527 04:01:11.832947 1549 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 04:01:11.832973 update_engine[1549]: I20250527 04:01:11.832957 1549 omaha_request_action.cc:617] Omaha request response:
May 27 04:01:11.833085 update_engine[1549]: E20250527 04:01:11.833057 1549 omaha_request_action.cc:636] Omaha request network transfer failed.
May 27 04:01:11.833117 update_engine[1549]: I20250527 04:01:11.833083 1549 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 27 04:01:11.833117 update_engine[1549]: I20250527 04:01:11.833091 1549 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 04:01:11.833117 update_engine[1549]: I20250527 04:01:11.833096 1549 update_attempter.cc:306] Processing Done.
May 27 04:01:11.833117 update_engine[1549]: E20250527 04:01:11.833110 1549 update_attempter.cc:619] Update failed.
May 27 04:01:11.833117 update_engine[1549]: I20250527 04:01:11.833117 1549 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 27 04:01:11.833246 update_engine[1549]: I20250527 04:01:11.833122 1549 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 27 04:01:11.833246 update_engine[1549]: I20250527 04:01:11.833129 1549 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 27 04:01:11.833291 update_engine[1549]: I20250527 04:01:11.833243 1549 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 04:01:11.833495 update_engine[1549]: I20250527 04:01:11.833365 1549 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 04:01:11.833495 update_engine[1549]: I20250527 04:01:11.833384 1549 omaha_request_action.cc:272] Request:
May 27 04:01:11.833495 update_engine[1549]:
May 27 04:01:11.833495 update_engine[1549]:
May 27 04:01:11.833495 update_engine[1549]:
May 27 04:01:11.833495 update_engine[1549]:
May 27 04:01:11.833495 update_engine[1549]:
May 27 04:01:11.833495 update_engine[1549]:
May 27 04:01:11.833495 update_engine[1549]: I20250527 04:01:11.833391 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 04:01:11.833767 locksmithd[1588]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 27 04:01:11.834010 update_engine[1549]: I20250527 04:01:11.833786 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 04:01:11.834100 update_engine[1549]: I20250527 04:01:11.834077 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 04:01:11.834619 update_engine[1549]: E20250527 04:01:11.834585 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 04:01:11.834751 update_engine[1549]: I20250527 04:01:11.834718 1549 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 04:01:11.834751 update_engine[1549]: I20250527 04:01:11.834738 1549 omaha_request_action.cc:617] Omaha request response:
May 27 04:01:11.834810 update_engine[1549]: I20250527 04:01:11.834751 1549 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 04:01:11.834810 update_engine[1549]: I20250527 04:01:11.834757 1549 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 04:01:11.834810 update_engine[1549]: I20250527 04:01:11.834762 1549 update_attempter.cc:306] Processing Done.
May 27 04:01:11.834810 update_engine[1549]: I20250527 04:01:11.834769 1549 update_attempter.cc:310] Error event sent.
May 27 04:01:11.834810 update_engine[1549]: I20250527 04:01:11.834776 1549 update_check_scheduler.cc:74] Next update check in 40m18s
May 27 04:01:11.835171 locksmithd[1588]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 27 04:01:12.750861 kubelet[2692]: E0527 04:01:12.750789 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-n8xhd" podUID="305d83db-a282-47d8-aae7-29f765f39c27"
May 27 04:01:14.123333 systemd[1]: Started sshd@50-172.237.145.45:22-139.178.68.195:46490.service - OpenSSH per-connection server daemon (139.178.68.195:46490).
May 27 04:01:14.450299 sshd[6368]: Accepted publickey for core from 139.178.68.195 port 46490 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:14.451597 sshd-session[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:14.456242 systemd-logind[1546]: New session 51 of user core.
May 27 04:01:14.460304 systemd[1]: Started session-51.scope - Session 51 of User core.
May 27 04:01:14.742319 sshd[6370]: Connection closed by 139.178.68.195 port 46490
May 27 04:01:14.742836 sshd-session[6368]: pam_unix(sshd:session): session closed for user core
May 27 04:01:14.749280 systemd[1]: sshd@50-172.237.145.45:22-139.178.68.195:46490.service: Deactivated successfully.
May 27 04:01:14.752548 systemd[1]: session-51.scope: Deactivated successfully.
May 27 04:01:14.753724 systemd-logind[1546]: Session 51 logged out. Waiting for processes to exit.
May 27 04:01:14.755552 systemd-logind[1546]: Removed session 51.
May 27 04:01:17.752928 kubelet[2692]: E0527 04:01:17.752887 2692 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-6cdf9fdb77-bk2d7" podUID="dc5ba6bb-b92b-4490-90f8-193e957a2beb"
May 27 04:01:19.806074 systemd[1]: Started sshd@51-172.237.145.45:22-139.178.68.195:46496.service - OpenSSH per-connection server daemon (139.178.68.195:46496).
May 27 04:01:20.148542 sshd[6389]: Accepted publickey for core from 139.178.68.195 port 46496 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:20.150045 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:20.154242 systemd-logind[1546]: New session 52 of user core.
May 27 04:01:20.159328 systemd[1]: Started session-52.scope - Session 52 of User core.
May 27 04:01:20.451950 sshd[6391]: Connection closed by 139.178.68.195 port 46496
May 27 04:01:20.452509 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
May 27 04:01:20.457153 systemd[1]: sshd@51-172.237.145.45:22-139.178.68.195:46496.service: Deactivated successfully.
May 27 04:01:20.459150 systemd[1]: session-52.scope: Deactivated successfully.
May 27 04:01:20.459964 systemd-logind[1546]: Session 52 logged out. Waiting for processes to exit.
May 27 04:01:20.461622 systemd-logind[1546]: Removed session 52.