May 27 18:16:10.865033 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 18:16:10.865057 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:16:10.865065 kernel: BIOS-provided physical RAM map:
May 27 18:16:10.865074 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 27 18:16:10.865079 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 27 18:16:10.865085 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 18:16:10.865091 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 27 18:16:10.865097 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 27 18:16:10.865102 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 18:16:10.865108 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 18:16:10.865114 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 18:16:10.865119 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 18:16:10.865127 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 27 18:16:10.865133 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 18:16:10.865139 kernel: NX (Execute Disable) protection: active
May 27 18:16:10.865145 kernel: APIC: Static calls initialized
May 27 18:16:10.865151 kernel: SMBIOS 2.8 present.
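The BIOS-e820 lines above describe the firmware-provided RAM map as inclusive hex ranges. A minimal sketch of summing the "usable" ranges (the regex and the three sample lines copied from this log are illustrative assumptions, not part of any tool):

```python
import re

# Sample "usable" entries copied from the BIOS-e820 map logged above.
lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable",
]

# Hypothetical pattern matching the "[mem START-END] TYPE" portion of each entry.
pat = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

usable = 0
for line in lines:
    m = pat.search(line)
    if m and m.group(3) == "usable":
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        usable += end - start + 1  # e820 ranges are inclusive

print(usable // (1024 * 1024), "MiB usable")  # → 4095 MiB usable
```

The ~4 GiB total is consistent with the "Memory: 3961044K/4193772K available" line later in this boot; the small difference is memory the kernel itself reserves.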
May 27 18:16:10.865159 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 27 18:16:10.865184 kernel: DMI: Memory slots populated: 1/1
May 27 18:16:10.865190 kernel: Hypervisor detected: KVM
May 27 18:16:10.865195 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 18:16:10.865201 kernel: kvm-clock: using sched offset of 5628571809 cycles
May 27 18:16:10.865207 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 18:16:10.865214 kernel: tsc: Detected 1999.999 MHz processor
May 27 18:16:10.865220 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 18:16:10.865227 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 18:16:10.865233 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 27 18:16:10.865242 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 18:16:10.865249 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 18:16:10.865255 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 27 18:16:10.865261 kernel: Using GB pages for direct mapping
May 27 18:16:10.865267 kernel: ACPI: Early table checksum verification disabled
May 27 18:16:10.865273 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 27 18:16:10.865279 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865285 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865291 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865300 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 27 18:16:10.865306 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865312 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865318 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865327 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 18:16:10.865334 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 27 18:16:10.865342 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 27 18:16:10.865349 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 27 18:16:10.865355 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 27 18:16:10.865362 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 27 18:16:10.865368 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 27 18:16:10.865374 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 27 18:16:10.865380 kernel: No NUMA configuration found
May 27 18:16:10.865387 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 27 18:16:10.865395 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 27 18:16:10.865402 kernel: Zone ranges:
May 27 18:16:10.865408 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 18:16:10.865414 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 18:16:10.865421 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 27 18:16:10.865427 kernel: Device empty
May 27 18:16:10.865433 kernel: Movable zone start for each node
May 27 18:16:10.865440 kernel: Early memory node ranges
May 27 18:16:10.865446 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 18:16:10.865452 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 27 18:16:10.865460 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 27 18:16:10.865467 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 27 18:16:10.865473 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 18:16:10.865479 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 18:16:10.865486 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 27 18:16:10.865492 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 18:16:10.865498 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 18:16:10.865505 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 18:16:10.865511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 18:16:10.865520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 18:16:10.865526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 18:16:10.865532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 18:16:10.865539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 18:16:10.865545 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 18:16:10.865552 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 18:16:10.865558 kernel: TSC deadline timer available
May 27 18:16:10.865564 kernel: CPU topo: Max. logical packages: 1
May 27 18:16:10.865570 kernel: CPU topo: Max. logical dies: 1
May 27 18:16:10.865579 kernel: CPU topo: Max. dies per package: 1
May 27 18:16:10.865585 kernel: CPU topo: Max. threads per core: 1
May 27 18:16:10.865592 kernel: CPU topo: Num. cores per package: 2
May 27 18:16:10.865598 kernel: CPU topo: Num. threads per package: 2
May 27 18:16:10.865604 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 18:16:10.865610 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 18:16:10.865617 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 18:16:10.865623 kernel: kvm-guest: setup PV sched yield
May 27 18:16:10.865629 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 18:16:10.865638 kernel: Booting paravirtualized kernel on KVM
May 27 18:16:10.865644 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 18:16:10.865650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 18:16:10.865657 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 18:16:10.865663 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 18:16:10.865669 kernel: pcpu-alloc: [0] 0 1
May 27 18:16:10.865675 kernel: kvm-guest: PV spinlocks enabled
May 27 18:16:10.865682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 18:16:10.865689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:16:10.865698 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
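The kernel command line logged above is a whitespace-separated list of `key=value` options and bare flags, where keys such as `console=` may repeat. A minimal sketch of splitting such a line (the abbreviated sample string is taken from this log; the parsing approach is an assumption, not the kernel's own parser):

```python
import shlex

# Abbreviated sample of the command line logged above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected")

opts = {}
for token in shlex.split(cmdline):
    # Split on the first "=" only, so values like LABEL=ROOT stay intact.
    key, sep, value = token.partition("=")
    # Keep every value for repeated keys (e.g. console=), True for bare flags.
    opts.setdefault(key, []).append(value if sep else True)

print(opts["console"])  # → ['ttyS0,115200n8', 'tty0']
```

Collecting repeated keys into lists matters here: both `console=` entries are live (serial and VGA console), which is why the log later shows both `[tty0]` and `[ttyS0]` enabled.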
May 27 18:16:10.865705 kernel: random: crng init done
May 27 18:16:10.865711 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 18:16:10.865717 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 18:16:10.865724 kernel: Fallback order for Node 0: 0
May 27 18:16:10.865730 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 27 18:16:10.865736 kernel: Policy zone: Normal
May 27 18:16:10.865743 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 18:16:10.865751 kernel: software IO TLB: area num 2.
May 27 18:16:10.865758 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 18:16:10.865764 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 18:16:10.865770 kernel: ftrace: allocated 157 pages with 5 groups
May 27 18:16:10.865776 kernel: Dynamic Preempt: voluntary
May 27 18:16:10.865783 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 18:16:10.865790 kernel: rcu: RCU event tracing is enabled.
May 27 18:16:10.865797 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 18:16:10.865803 kernel: Trampoline variant of Tasks RCU enabled.
May 27 18:16:10.865812 kernel: Rude variant of Tasks RCU enabled.
May 27 18:16:10.865818 kernel: Tracing variant of Tasks RCU enabled.
May 27 18:16:10.865825 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 18:16:10.865831 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 18:16:10.865838 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:16:10.865851 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:16:10.865861 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 18:16:10.865867 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 18:16:10.865874 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 18:16:10.865880 kernel: Console: colour VGA+ 80x25
May 27 18:16:10.865887 kernel: printk: legacy console [tty0] enabled
May 27 18:16:10.865894 kernel: printk: legacy console [ttyS0] enabled
May 27 18:16:10.865903 kernel: ACPI: Core revision 20240827
May 27 18:16:10.865910 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 18:16:10.865916 kernel: APIC: Switch to symmetric I/O mode setup
May 27 18:16:10.865923 kernel: x2apic enabled
May 27 18:16:10.865929 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 18:16:10.865938 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 18:16:10.865945 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 18:16:10.865951 kernel: kvm-guest: setup PV IPIs
May 27 18:16:10.865958 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 18:16:10.865965 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
May 27 18:16:10.865971 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
May 27 18:16:10.865978 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 18:16:10.865985 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 18:16:10.865991 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 18:16:10.866000 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 18:16:10.866007 kernel: Spectre V2 : Mitigation: Retpolines
May 27 18:16:10.866014 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 18:16:10.866020 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 27 18:16:10.866027 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 18:16:10.866034 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 18:16:10.866040 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 18:16:10.866047 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 18:16:10.866056 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 18:16:10.866063 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 18:16:10.866069 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 18:16:10.866076 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 18:16:10.866083 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 27 18:16:10.866089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 18:16:10.866096 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 27 18:16:10.866103 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 27 18:16:10.866109 kernel: Freeing SMP alternatives memory: 32K
May 27 18:16:10.866118 kernel: pid_max: default: 32768 minimum: 301
May 27 18:16:10.866124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 18:16:10.866131 kernel: landlock: Up and running.
May 27 18:16:10.866137 kernel: SELinux: Initializing.
May 27 18:16:10.866144 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 18:16:10.866151 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 18:16:10.866158 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 27 18:16:10.866190 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 18:16:10.866197 kernel: ... version:                0
May 27 18:16:10.866207 kernel: ... bit width:              48
May 27 18:16:10.866213 kernel: ... generic registers:      6
May 27 18:16:10.866220 kernel: ... value mask:             0000ffffffffffff
May 27 18:16:10.866226 kernel: ... max period:             00007fffffffffff
May 27 18:16:10.866233 kernel: ... fixed-purpose events:   0
May 27 18:16:10.866239 kernel: ... event mask:             000000000000003f
May 27 18:16:10.866246 kernel: signal: max sigframe size: 3376
May 27 18:16:10.866253 kernel: rcu: Hierarchical SRCU implementation.
May 27 18:16:10.866259 kernel: rcu: Max phase no-delay instances is 400.
May 27 18:16:10.866268 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 18:16:10.866275 kernel: smp: Bringing up secondary CPUs ...
May 27 18:16:10.866281 kernel: smpboot: x86: Booting SMP configuration:
May 27 18:16:10.866288 kernel: .... node #0, CPUs: #1
May 27 18:16:10.866294 kernel: smp: Brought up 1 node, 2 CPUs
May 27 18:16:10.866301 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
May 27 18:16:10.866308 kernel: Memory: 3961044K/4193772K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 227300K reserved, 0K cma-reserved)
May 27 18:16:10.866314 kernel: devtmpfs: initialized
May 27 18:16:10.866321 kernel: x86/mm: Memory block size: 128MB
May 27 18:16:10.866330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 18:16:10.866337 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 18:16:10.866343 kernel: pinctrl core: initialized pinctrl subsystem
May 27 18:16:10.866350 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 18:16:10.866356 kernel: audit: initializing netlink subsys (disabled)
May 27 18:16:10.866363 kernel: audit: type=2000 audit(1748369768.103:1): state=initialized audit_enabled=0 res=1
May 27 18:16:10.866369 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 18:16:10.866376 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 18:16:10.866383 kernel: cpuidle: using governor menu
May 27 18:16:10.866391 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 18:16:10.866398 kernel: dca service started, version 1.12.1
May 27 18:16:10.866405 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 18:16:10.866420 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 18:16:10.866426 kernel: PCI: Using configuration type 1 for base access
May 27 18:16:10.866433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 18:16:10.866440 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 18:16:10.866446 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 18:16:10.866453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 18:16:10.866462 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 18:16:10.866469 kernel: ACPI: Added _OSI(Module Device)
May 27 18:16:10.866476 kernel: ACPI: Added _OSI(Processor Device)
May 27 18:16:10.866482 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 18:16:10.866489 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 18:16:10.866495 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 18:16:10.866502 kernel: ACPI: Interpreter enabled
May 27 18:16:10.866508 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 18:16:10.866515 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 18:16:10.866524 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 18:16:10.866530 kernel: PCI: Using E820 reservations for host bridge windows
May 27 18:16:10.866537 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 18:16:10.866544 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 18:16:10.866714 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 18:16:10.866844 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 18:16:10.866951 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 18:16:10.866960 kernel: PCI host bridge to bus 0000:00
May 27 18:16:10.867078 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 18:16:10.867218 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 18:16:10.867324 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 18:16:10.867433 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 27 18:16:10.867528 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 18:16:10.867641 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 27 18:16:10.867743 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 18:16:10.867866 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 18:16:10.867987 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 18:16:10.868095 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 18:16:10.868236 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 18:16:10.868345 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 18:16:10.868449 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 18:16:10.868569 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 27 18:16:10.868676 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 27 18:16:10.868781 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 18:16:10.868886 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 18:16:10.869003 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 18:16:10.869108 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 27 18:16:10.869431 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 18:16:10.869551 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 18:16:10.869660 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 18:16:10.869776 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 18:16:10.869881 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 18:16:10.869993 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 18:16:10.870098 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 27 18:16:10.870495 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 27 18:16:10.870616 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 18:16:10.870723 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 18:16:10.870732 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 18:16:10.870739 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 18:16:10.870746 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 18:16:10.870753 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 18:16:10.870759 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 18:16:10.870770 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 18:16:10.870777 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 18:16:10.870783 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 18:16:10.870790 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 18:16:10.870796 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 18:16:10.870803 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 18:16:10.870810 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 18:16:10.870816 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 18:16:10.870823 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 18:16:10.870832 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 18:16:10.870839 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 18:16:10.870846 kernel: iommu: Default domain type: Translated
May 27 18:16:10.870852 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 18:16:10.870859 kernel: PCI: Using ACPI for IRQ routing
May 27 18:16:10.870866 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 18:16:10.870872 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 27 18:16:10.870879 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 27 18:16:10.870983 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 18:16:10.871092 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 18:16:10.871230 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 18:16:10.871241 kernel: vgaarb: loaded
May 27 18:16:10.871248 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 18:16:10.871255 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 18:16:10.871261 kernel: clocksource: Switched to clocksource kvm-clock
May 27 18:16:10.871268 kernel: VFS: Disk quotas dquot_6.6.0
May 27 18:16:10.871275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 18:16:10.871285 kernel: pnp: PnP ACPI init
May 27 18:16:10.871405 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 18:16:10.871416 kernel: pnp: PnP ACPI: found 5 devices
May 27 18:16:10.871423 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 18:16:10.871429 kernel: NET: Registered PF_INET protocol family
May 27 18:16:10.871436 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 18:16:10.871443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 18:16:10.871449 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 18:16:10.871460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 18:16:10.871467 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 18:16:10.871473 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 18:16:10.871480 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 18:16:10.871486 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 18:16:10.871493 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 18:16:10.871500 kernel: NET: Registered PF_XDP protocol family
May 27 18:16:10.871596 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 18:16:10.871691 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 18:16:10.871790 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 18:16:10.871885 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 27 18:16:10.871979 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 18:16:10.872073 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 27 18:16:10.872082 kernel: PCI: CLS 0 bytes, default 64
May 27 18:16:10.872089 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 18:16:10.872096 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 27 18:16:10.872103 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
May 27 18:16:10.872113 kernel: Initialise system trusted keyrings
May 27 18:16:10.872120 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 18:16:10.872126 kernel: Key type asymmetric registered
May 27 18:16:10.872133 kernel: Asymmetric key parser 'x509' registered
May 27 18:16:10.872139 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 18:16:10.872146 kernel: io scheduler mq-deadline registered
May 27 18:16:10.872153 kernel: io scheduler kyber registered
May 27 18:16:10.872159 kernel: io scheduler bfq registered
May 27 18:16:10.872207 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 18:16:10.872218 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 18:16:10.872225 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 18:16:10.872232 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 18:16:10.872239 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 18:16:10.872245 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 18:16:10.872252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 18:16:10.872259 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 18:16:10.872265 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 18:16:10.872383 kernel: rtc_cmos 00:03: RTC can wake from S4
May 27 18:16:10.872489 kernel: rtc_cmos 00:03: registered as rtc0
May 27 18:16:10.872589 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T18:16:10 UTC (1748369770)
May 27 18:16:10.872693 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 18:16:10.872702 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 18:16:10.872709 kernel: NET: Registered PF_INET6 protocol family
May 27 18:16:10.872716 kernel: Segment Routing with IPv6
May 27 18:16:10.872722 kernel: In-situ OAM (IOAM) with IPv6
May 27 18:16:10.872729 kernel: NET: Registered PF_PACKET protocol family
May 27 18:16:10.872738 kernel: Key type dns_resolver registered
May 27 18:16:10.872745 kernel: IPI shorthand broadcast: enabled
May 27 18:16:10.872752 kernel: sched_clock: Marking stable (2746004651, 221146875)->(2994300706, -27149180)
May 27 18:16:10.872758 kernel: registered taskstats version 1
May 27 18:16:10.872765 kernel: Loading compiled-in X.509 certificates
May 27 18:16:10.872772 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 18:16:10.872778 kernel: Demotion targets for Node 0: null
May 27 18:16:10.872785 kernel: Key type .fscrypt registered
May 27 18:16:10.872792 kernel: Key type fscrypt-provisioning registered
May 27 18:16:10.872800 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 18:16:10.872807 kernel: ima: Allocated hash algorithm: sha1
May 27 18:16:10.872814 kernel: ima: No architecture policies found
May 27 18:16:10.872820 kernel: clk: Disabling unused clocks
May 27 18:16:10.872827 kernel: Warning: unable to open an initial console.
May 27 18:16:10.872834 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 18:16:10.872840 kernel: Write protecting the kernel read-only data: 24576k
May 27 18:16:10.872847 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 18:16:10.872854 kernel: Run /init as init process
May 27 18:16:10.872862 kernel: with arguments:
May 27 18:16:10.872869 kernel: /init
May 27 18:16:10.872875 kernel: with environment:
May 27 18:16:10.872882 kernel: HOME=/
May 27 18:16:10.872889 kernel: TERM=linux
May 27 18:16:10.872910 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 18:16:10.872920 systemd[1]: Successfully made /usr/ read-only.
May 27 18:16:10.872930 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 18:16:10.872940 systemd[1]: Detected virtualization kvm.
May 27 18:16:10.872947 systemd[1]: Detected architecture x86-64.
May 27 18:16:10.872954 systemd[1]: Running in initrd.
May 27 18:16:10.872961 systemd[1]: No hostname configured, using default hostname.
May 27 18:16:10.872969 systemd[1]: Hostname set to .
May 27 18:16:10.872976 systemd[1]: Initializing machine ID from random generator.
May 27 18:16:10.872983 systemd[1]: Queued start job for default target initrd.target.
May 27 18:16:10.872990 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
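The rtc_cmos line above prints the same instant twice: once as an ISO timestamp and once as a Unix epoch value in parentheses. A minimal sketch of cross-checking the two (the variable names are illustrative; the values come from this log):

```python
from datetime import datetime, timezone

# Epoch value from "rtc_cmos 00:03: setting system clock ... (1748369770)".
epoch = 1748369770
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(stamp.isoformat())  # → 2025-05-27T18:16:10+00:00
```

This matches the "2025-05-27T18:16:10 UTC" the kernel logged, and also explains the journal timestamps: the wall clock was set two seconds after the `audit(1748369768.103:1)` record earlier in the boot.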
May 27 18:16:10.873000 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:16:10.873008 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 18:16:10.873016 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 18:16:10.873023 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 18:16:10.873031 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 18:16:10.873039 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 18:16:10.873049 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 18:16:10.873059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:16:10.873066 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 18:16:10.873073 systemd[1]: Reached target paths.target - Path Units.
May 27 18:16:10.873081 systemd[1]: Reached target slices.target - Slice Units.
May 27 18:16:10.873088 systemd[1]: Reached target swap.target - Swaps.
May 27 18:16:10.873095 systemd[1]: Reached target timers.target - Timer Units.
May 27 18:16:10.873102 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 18:16:10.873110 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 18:16:10.873120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 18:16:10.873127 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 18:16:10.873134 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:16:10.873141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 18:16:10.873149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:16:10.873156 systemd[1]: Reached target sockets.target - Socket Units.
May 27 18:16:10.873186 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 18:16:10.873194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 18:16:10.873201 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 18:16:10.873209 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 18:16:10.873216 systemd[1]: Starting systemd-fsck-usr.service...
May 27 18:16:10.873223 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 18:16:10.873231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 18:16:10.873238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:16:10.873248 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 18:16:10.873255 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:16:10.873263 systemd[1]: Finished systemd-fsck-usr.service.
May 27 18:16:10.873271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 18:16:10.873315 systemd-journald[206]: Collecting audit messages is disabled.
May 27 18:16:10.873334 systemd-journald[206]: Journal started
May 27 18:16:10.873375 systemd-journald[206]: Runtime Journal (/run/log/journal/f1c1d7588c4341eba22c8824ca323cee) is 8M, max 78.5M, 70.5M free.
May 27 18:16:10.876187 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 18:16:10.876787 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 18:16:10.877609 systemd-modules-load[207]: Inserted module 'overlay'
May 27 18:16:10.923181 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 18:16:10.924389 kernel: Bridge firewalling registered
May 27 18:16:10.924108 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 27 18:16:10.927550 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 18:16:10.959855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 18:16:10.960895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:16:10.962155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 18:16:10.963417 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 18:16:10.968126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 18:16:10.971267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:16:10.977007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 18:16:10.988421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:16:10.990401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:16:10.993292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 18:16:10.997971 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 18:16:11.002620 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 18:16:11.019014 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 18:16:11.035750 systemd-resolved[240]: Positive Trust Anchors:
May 27 18:16:11.035763 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 18:16:11.035791 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 18:16:11.039417 systemd-resolved[240]: Defaulting to hostname 'linux'.
May 27 18:16:11.042503 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 18:16:11.043373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 18:16:11.105203 kernel: SCSI subsystem initialized
May 27 18:16:11.113189 kernel: Loading iSCSI transport class v2.0-870.
May 27 18:16:11.123188 kernel: iscsi: registered transport (tcp)
May 27 18:16:11.143339 kernel: iscsi: registered transport (qla4xxx)
May 27 18:16:11.143386 kernel: QLogic iSCSI HBA Driver
May 27 18:16:11.162331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 18:16:11.178051 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:16:11.180736 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 18:16:11.227871 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 18:16:11.229704 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 18:16:11.278196 kernel: raid6: avx2x4 gen() 29797 MB/s
May 27 18:16:11.296190 kernel: raid6: avx2x2 gen() 29069 MB/s
May 27 18:16:11.314767 kernel: raid6: avx2x1 gen() 20099 MB/s
May 27 18:16:11.314789 kernel: raid6: using algorithm avx2x4 gen() 29797 MB/s
May 27 18:16:11.333765 kernel: raid6: .... xor() 4501 MB/s, rmw enabled
May 27 18:16:11.333817 kernel: raid6: using avx2x2 recovery algorithm
May 27 18:16:11.352192 kernel: xor: automatically using best checksumming function avx
May 27 18:16:11.482202 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 18:16:11.490601 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 18:16:11.493095 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 18:16:11.522954 systemd-udevd[453]: Using default interface naming scheme 'v255'.
May 27 18:16:11.528084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 18:16:11.531117 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 18:16:11.549992 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation
May 27 18:16:11.578838 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 18:16:11.580710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 18:16:11.642617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:16:11.646689 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 18:16:11.711208 kernel: cryptd: max_cpu_qlen set to 1000
May 27 18:16:11.715214 kernel: libata version 3.00 loaded.
May 27 18:16:11.725521 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 18:16:11.731189 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 27 18:16:11.742183 kernel: scsi host0: Virtio SCSI HBA
May 27 18:16:11.744189 kernel: AES CTR mode by8 optimization enabled
May 27 18:16:11.836996 kernel: ahci 0000:00:1f.2: version 3.0
May 27 18:16:11.837267 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 18:16:11.849188 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 27 18:16:11.849836 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 18:16:11.849477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 18:16:11.861656 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 18:16:11.861823 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 18:16:11.861990 kernel: scsi host1: ahci
May 27 18:16:11.862129 kernel: scsi host2: ahci
May 27 18:16:11.863313 kernel: scsi host3: ahci
May 27 18:16:11.863467 kernel: scsi host4: ahci
May 27 18:16:11.850332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:16:11.901725 kernel: scsi host5: ahci
May 27 18:16:11.901885 kernel: scsi host6: ahci
May 27 18:16:11.902022 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 31 lpm-pol 0
May 27 18:16:11.902034 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 31 lpm-pol 0
May 27 18:16:11.902044 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 31 lpm-pol 0
May 27 18:16:11.902054 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 31 lpm-pol 0
May 27 18:16:11.902063 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 31 lpm-pol 0
May 27 18:16:11.902076 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 31 lpm-pol 0
May 27 18:16:11.861049 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:16:11.863978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 18:16:11.902850 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 18:16:11.966267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:16:12.203925 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 18:16:12.203983 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 18:16:12.203994 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 18:16:12.204013 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 27 18:16:12.204022 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 18:16:12.204031 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 18:16:12.218320 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 27 18:16:12.221732 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 27 18:16:12.221888 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 27 18:16:12.222025 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 27 18:16:12.222155 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 27 18:16:12.260530 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 18:16:12.260555 kernel: GPT:9289727 != 167739391
May 27 18:16:12.260566 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 18:16:12.262832 kernel: GPT:9289727 != 167739391
May 27 18:16:12.262846 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 18:16:12.265456 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 18:16:12.266856 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 27 18:16:12.315777 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 27 18:16:12.325525 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 27 18:16:12.340494 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 27 18:16:12.341325 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 18:16:12.349233 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 27 18:16:12.349821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 27 18:16:12.352645 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 18:16:12.353265 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:16:12.354517 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 18:16:12.357291 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 18:16:12.360269 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 18:16:12.368715 disk-uuid[631]: Primary Header is updated.
May 27 18:16:12.368715 disk-uuid[631]: Secondary Entries is updated.
May 27 18:16:12.368715 disk-uuid[631]: Secondary Header is updated.
May 27 18:16:12.372311 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 18:16:12.377190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 18:16:12.391208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 18:16:13.395198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 18:16:13.395457 disk-uuid[636]: The operation has completed successfully.
May 27 18:16:13.444968 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 18:16:13.445093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 18:16:13.472995 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 18:16:13.485071 sh[653]: Success
May 27 18:16:13.504479 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 18:16:13.504512 kernel: device-mapper: uevent: version 1.0.3
May 27 18:16:13.505198 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 18:16:13.517210 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 18:16:13.557145 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 18:16:13.559583 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 18:16:13.571652 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 18:16:13.583475 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 18:16:13.583504 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (665)
May 27 18:16:13.586208 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 18:16:13.590535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 18:16:13.590555 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 18:16:13.598763 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 18:16:13.599733 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 18:16:13.600603 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 18:16:13.601303 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 18:16:13.603829 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 18:16:13.628307 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (696)
May 27 18:16:13.628333 kernel: BTRFS info (device sda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:16:13.631691 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 18:16:13.633221 kernel: BTRFS info (device sda6): using free-space-tree
May 27 18:16:13.647191 kernel: BTRFS info (device sda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:16:13.646399 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 18:16:13.649359 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 18:16:13.717978 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 18:16:13.725352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 18:16:13.748913 ignition[767]: Ignition 2.21.0
May 27 18:16:13.749652 ignition[767]: Stage: fetch-offline
May 27 18:16:13.749689 ignition[767]: no configs at "/usr/lib/ignition/base.d"
May 27 18:16:13.749699 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 18:16:13.749773 ignition[767]: parsed url from cmdline: ""
May 27 18:16:13.749777 ignition[767]: no config URL provided
May 27 18:16:13.749781 ignition[767]: reading system config file "/usr/lib/ignition/user.ign"
May 27 18:16:13.749789 ignition[767]: no config at "/usr/lib/ignition/user.ign"
May 27 18:16:13.754731 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 18:16:13.749793 ignition[767]: failed to fetch config: resource requires networking
May 27 18:16:13.749933 ignition[767]: Ignition finished successfully
May 27 18:16:13.763695 systemd-networkd[838]: lo: Link UP
May 27 18:16:13.763706 systemd-networkd[838]: lo: Gained carrier
May 27 18:16:13.765348 systemd-networkd[838]: Enumeration completed
May 27 18:16:13.765700 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 18:16:13.765704 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 18:16:13.765887 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 18:16:13.766651 systemd-networkd[838]: eth0: Link UP
May 27 18:16:13.766654 systemd-networkd[838]: eth0: Gained carrier
May 27 18:16:13.766662 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 18:16:13.766903 systemd[1]: Reached target network.target - Network.
May 27 18:16:13.768470 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 18:16:13.790789 ignition[842]: Ignition 2.21.0
May 27 18:16:13.790800 ignition[842]: Stage: fetch
May 27 18:16:13.790918 ignition[842]: no configs at "/usr/lib/ignition/base.d"
May 27 18:16:13.790927 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 18:16:13.790993 ignition[842]: parsed url from cmdline: ""
May 27 18:16:13.790996 ignition[842]: no config URL provided
May 27 18:16:13.791001 ignition[842]: reading system config file "/usr/lib/ignition/user.ign"
May 27 18:16:13.791008 ignition[842]: no config at "/usr/lib/ignition/user.ign"
May 27 18:16:13.791042 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #1
May 27 18:16:13.791392 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 18:16:13.992132 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #2
May 27 18:16:13.992253 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 18:16:14.258224 systemd-networkd[838]: eth0: DHCPv4 address 172.237.129.174/24, gateway 172.237.129.1 acquired from 23.205.166.191
May 27 18:16:14.392351 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #3
May 27 18:16:14.484432 ignition[842]: PUT result: OK
May 27 18:16:14.485083 ignition[842]: GET http://169.254.169.254/v1/user-data: attempt #1
May 27 18:16:14.600043 ignition[842]: GET result: OK
May 27 18:16:14.600243 ignition[842]: parsing config with SHA512: d31a85dc1e2225697bf3b24889be0b668e25f2237394603fa44ddf6f8c180fc36f5b6c30ba9bb6b9dead42e40f64e5e6786b1b7aec228b7a338ed6a4af645088
May 27 18:16:14.606669 unknown[842]: fetched base config from "system"
May 27 18:16:14.606684 unknown[842]: fetched base config from "system"
May 27 18:16:14.607059 ignition[842]: fetch: fetch complete
May 27 18:16:14.606690 unknown[842]: fetched user config from "akamai"
May 27 18:16:14.607070 ignition[842]: fetch: fetch passed
May 27 18:16:14.610478 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 18:16:14.607126 ignition[842]: Ignition finished successfully
May 27 18:16:14.614297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 18:16:14.656213 ignition[849]: Ignition 2.21.0
May 27 18:16:14.656227 ignition[849]: Stage: kargs
May 27 18:16:14.656338 ignition[849]: no configs at "/usr/lib/ignition/base.d"
May 27 18:16:14.656348 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 18:16:14.656887 ignition[849]: kargs: kargs passed
May 27 18:16:14.658472 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 18:16:14.656923 ignition[849]: Ignition finished successfully
May 27 18:16:14.660953 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 18:16:14.677512 ignition[855]: Ignition 2.21.0
May 27 18:16:14.677526 ignition[855]: Stage: disks
May 27 18:16:14.677654 ignition[855]: no configs at "/usr/lib/ignition/base.d"
May 27 18:16:14.677666 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 18:16:14.681711 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 18:16:14.678593 ignition[855]: disks: disks passed
May 27 18:16:14.682446 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 18:16:14.678636 ignition[855]: Ignition finished successfully
May 27 18:16:14.683475 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 18:16:14.684522 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 18:16:14.685460 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 18:16:14.686662 systemd[1]: Reached target basic.target - Basic System.
May 27 18:16:14.688442 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 18:16:14.722935 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 18:16:14.727079 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 18:16:14.729455 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 18:16:14.836188 kernel: EXT4-fs (sda9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 18:16:14.836211 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 18:16:14.837107 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 18:16:14.839030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 18:16:14.841230 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 18:16:14.842208 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 18:16:14.842248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 18:16:14.842270 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 18:16:14.849005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 18:16:14.850414 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 18:16:14.858189 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (871)
May 27 18:16:14.864258 kernel: BTRFS info (device sda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:16:14.864282 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 18:16:14.864293 kernel: BTRFS info (device sda6): using free-space-tree
May 27 18:16:14.870365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 18:16:14.896375 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory
May 27 18:16:14.900659 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory
May 27 18:16:14.905038 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory
May 27 18:16:14.908552 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 18:16:14.988247 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 18:16:14.990012 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 18:16:14.991705 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 18:16:15.006154 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 18:16:15.009221 kernel: BTRFS info (device sda6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 18:16:15.028616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 18:16:15.032361 ignition[983]: INFO : Ignition 2.21.0
May 27 18:16:15.032361 ignition[983]: INFO : Stage: mount
May 27 18:16:15.033519 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 18:16:15.033519 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 18:16:15.033519 ignition[983]: INFO : mount: mount passed
May 27 18:16:15.033519 ignition[983]: INFO : Ignition finished successfully
May 27 18:16:15.034473 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 18:16:15.037238 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 18:16:15.590943 systemd-networkd[838]: eth0: Gained IPv6LL
May 27 18:16:15.838397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 18:16:15.859446 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (996) May 27 18:16:15.859533 kernel: BTRFS info (device sda6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212 May 27 18:16:15.863346 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 18:16:15.863384 kernel: BTRFS info (device sda6): using free-space-tree May 27 18:16:15.869105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 18:16:15.900436 ignition[1013]: INFO : Ignition 2.21.0 May 27 18:16:15.900436 ignition[1013]: INFO : Stage: files May 27 18:16:15.901740 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 18:16:15.901740 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 18:16:15.901740 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping May 27 18:16:15.905884 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 18:16:15.905884 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 18:16:15.909066 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 18:16:15.909856 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 18:16:15.909856 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 18:16:15.909718 unknown[1013]: wrote ssh authorized keys file for user: core May 27 18:16:15.912102 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 18:16:15.912102 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 18:16:16.109069 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): GET result: OK
May 27 18:16:16.313568 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 18:16:16.313568 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 18:16:16.315745 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 18:16:16.321537 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 18:16:16.321537 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 18:16:16.321537 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:16:16.321537 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:16:16.321537 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:16:16.321537 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 18:16:16.915675 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 18:16:17.187879 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 18:16:17.187879 ignition[1013]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 18:16:17.190447 ignition[1013]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 18:16:17.191928 ignition[1013]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 18:16:17.191928 ignition[1013]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 18:16:17.191928 ignition[1013]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 18:16:17.194447 ignition[1013]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 27 18:16:17.194447 ignition[1013]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 27 18:16:17.194447 ignition[1013]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 18:16:17.194447 ignition[1013]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
May 27 18:16:17.194447 ignition[1013]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
May 27 18:16:17.194447 ignition[1013]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 18:16:17.194447 ignition[1013]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 18:16:17.194447 ignition[1013]: INFO : files: files passed
May 27 18:16:17.194447 ignition[1013]: INFO : Ignition finished successfully
May 27 18:16:17.195470 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 18:16:17.200288 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 18:16:17.202868 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 18:16:17.213310 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 18:16:17.214236 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 18:16:17.219850 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:16:17.219850 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:16:17.221755 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 18:16:17.222374 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 18:16:17.223478 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 18:16:17.225400 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
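The `createFiles` and `processing unit` operations above are driven entirely by the instance's Ignition config, which is not itself shown in this log. Purely as an illustration, a Butane snippet of the kind that would compile to such a config might look like the following (paths taken from the log; all contents and the unit body are hypothetical):

```yaml
# Hypothetical Butane sketch; the real provisioning config delivered to this
# Linode instance is not present in the log.
variant: flatcar
version: 1.1.0
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz  # fetched via GET, as in op(3)
    - path: /home/core/nginx.yaml
      contents:
        inline: |
          # manifest written verbatim, as in op(5)
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw  # as in op(9)
systemd:
  units:
    - name: prepare-helm.service
      enabled: true  # corresponds to op(f) "setting preset to enabled"
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin (hypothetical unit body)
```

Ignition executes the compiled form of such a config from the initramfs, writing everything under `/sysroot` before the switch to the real root.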
May 27 18:16:17.286255 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 18:16:17.286376 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 18:16:17.287698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 18:16:17.288670 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 18:16:17.290141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 18:16:17.290840 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 18:16:17.320436 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 18:16:17.322767 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 18:16:17.337927 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 18:16:17.338571 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:16:17.339190 systemd[1]: Stopped target timers.target - Timer Units.
May 27 18:16:17.339730 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 18:16:17.339830 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 18:16:17.340941 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 18:16:17.341693 systemd[1]: Stopped target basic.target - Basic System.
May 27 18:16:17.342841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 18:16:17.343840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 18:16:17.344882 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 18:16:17.346082 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 18:16:17.347188 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 18:16:17.348230 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 18:16:17.349462 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 18:16:17.350661 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 18:16:17.351843 systemd[1]: Stopped target swap.target - Swaps.
May 27 18:16:17.352969 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 18:16:17.353098 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 18:16:17.354604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 18:16:17.355416 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:16:17.356542 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 18:16:17.358471 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:16:17.359390 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 18:16:17.359486 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 18:16:17.361045 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 18:16:17.361210 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 18:16:17.361929 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 18:16:17.362056 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 18:16:17.365246 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 18:16:17.365991 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 18:16:17.366096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:16:17.369049 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 18:16:17.370002 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 18:16:17.371250 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:16:17.373289 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 18:16:17.373394 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 18:16:17.380613 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 18:16:17.381391 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 18:16:17.397497 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 18:16:17.400180 ignition[1066]: INFO : Ignition 2.21.0
May 27 18:16:17.400180 ignition[1066]: INFO : Stage: umount
May 27 18:16:17.400180 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 18:16:17.400180 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 18:16:17.403273 ignition[1066]: INFO : umount: umount passed
May 27 18:16:17.403273 ignition[1066]: INFO : Ignition finished successfully
May 27 18:16:17.402595 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 18:16:17.402731 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 18:16:17.425385 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 18:16:17.425481 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 18:16:17.426389 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 18:16:17.426463 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 18:16:17.427269 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 18:16:17.427315 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 18:16:17.428271 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 18:16:17.428316 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 18:16:17.429268 systemd[1]: Stopped target network.target - Network.
May 27 18:16:17.430235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 18:16:17.430283 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 18:16:17.431244 systemd[1]: Stopped target paths.target - Path Units.
May 27 18:16:17.432157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 18:16:17.436510 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:16:17.437357 systemd[1]: Stopped target slices.target - Slice Units.
May 27 18:16:17.438343 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 18:16:17.439512 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 18:16:17.439556 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 18:16:17.440752 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 18:16:17.440791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 18:16:17.441767 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 18:16:17.441818 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 18:16:17.442795 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 18:16:17.442837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 18:16:17.443796 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 18:16:17.443845 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 18:16:17.444931 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 18:16:17.445976 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 18:16:17.449581 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 18:16:17.449716 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 18:16:17.452728 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 18:16:17.452990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 18:16:17.453038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 18:16:17.454835 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 18:16:17.457310 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 18:16:17.457420 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 18:16:17.459545 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 18:16:17.459724 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 18:16:17.460339 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 18:16:17.460375 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:16:17.462231 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 18:16:17.464009 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 18:16:17.464059 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 18:16:17.466574 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 18:16:17.466625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 18:16:17.468265 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 18:16:17.468312 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 18:16:17.469128 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 18:16:17.473146 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 18:16:17.483667 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 18:16:17.484316 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 18:16:17.486504 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 18:16:17.486673 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 18:16:17.488066 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 18:16:17.488131 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 18:16:17.489009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 18:16:17.489044 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:16:17.490106 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 18:16:17.490154 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 18:16:17.491763 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 18:16:17.491809 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 18:16:17.492890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 18:16:17.492939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 18:16:17.496257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 18:16:17.496992 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 18:16:17.497045 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:16:17.498724 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 18:16:17.498770 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:16:17.500255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 18:16:17.500302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 18:16:17.508907 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 18:16:17.509720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 18:16:17.510572 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 18:16:17.512699 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 18:16:17.523102 systemd[1]: Switching root.
May 27 18:16:17.548406 systemd-journald[206]: Journal stopped
May 27 18:16:18.560716 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
May 27 18:16:18.560748 kernel: SELinux: policy capability network_peer_controls=1
May 27 18:16:18.560760 kernel: SELinux: policy capability open_perms=1
May 27 18:16:18.560773 kernel: SELinux: policy capability extended_socket_class=1
May 27 18:16:18.560781 kernel: SELinux: policy capability always_check_network=0
May 27 18:16:18.560789 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 18:16:18.560798 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 18:16:18.560807 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 18:16:18.560815 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 18:16:18.560823 kernel: SELinux: policy capability userspace_initial_context=0
May 27 18:16:18.560834 kernel: audit: type=1403 audit(1748369777.693:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 18:16:18.560843 systemd[1]: Successfully loaded SELinux policy in 51.417ms.
May 27 18:16:18.560854 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.904ms.
May 27 18:16:18.560864 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 18:16:18.560874 systemd[1]: Detected virtualization kvm.
May 27 18:16:18.560887 systemd[1]: Detected architecture x86-64.
May 27 18:16:18.560897 systemd[1]: Detected first boot.
May 27 18:16:18.560906 systemd[1]: Initializing machine ID from random generator.
May 27 18:16:18.560915 zram_generator::config[1109]: No configuration found.
May 27 18:16:18.560925 kernel: Guest personality initialized and is inactive
May 27 18:16:18.560934 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 18:16:18.560942 kernel: Initialized host personality
May 27 18:16:18.560953 kernel: NET: Registered PF_VSOCK protocol family
May 27 18:16:18.560962 systemd[1]: Populated /etc with preset unit settings.
May 27 18:16:18.560972 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 18:16:18.560981 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 18:16:18.560990 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 18:16:18.561000 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 18:16:18.561009 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 18:16:18.561021 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 18:16:18.561030 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 18:16:18.561040 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 18:16:18.561049 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 18:16:18.561059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 18:16:18.561068 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 18:16:18.561078 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 18:16:18.561091 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 18:16:18.561101 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 18:16:18.561111 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 18:16:18.561120 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 18:16:18.561132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 18:16:18.561142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 18:16:18.561152 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 18:16:18.563717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 18:16:18.563743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 18:16:18.563754 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 18:16:18.563763 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 18:16:18.563773 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 18:16:18.563783 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 18:16:18.563793 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 18:16:18.563803 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 18:16:18.563812 systemd[1]: Reached target slices.target - Slice Units.
May 27 18:16:18.563824 systemd[1]: Reached target swap.target - Swaps.
May 27 18:16:18.563834 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 18:16:18.563844 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 18:16:18.563853 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 18:16:18.563864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 18:16:18.563876 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 18:16:18.563886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 18:16:18.563896 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 18:16:18.563905 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 18:16:18.563915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 18:16:18.563925 systemd[1]: Mounting media.mount - External Media Directory...
May 27 18:16:18.563935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:16:18.563945 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 18:16:18.563958 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 18:16:18.563968 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 18:16:18.563979 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 18:16:18.563988 systemd[1]: Reached target machines.target - Containers.
May 27 18:16:18.563998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 18:16:18.564008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 18:16:18.564018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 18:16:18.564028 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 18:16:18.564040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 18:16:18.564049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 18:16:18.564059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 18:16:18.564069 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 18:16:18.564079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 18:16:18.564089 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 18:16:18.564099 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 18:16:18.564109 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 18:16:18.564119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 18:16:18.564130 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 18:16:18.564140 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 18:16:18.564150 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 18:16:18.564160 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 18:16:18.564190 kernel: fuse: init (API version 7.41)
May 27 18:16:18.564200 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 18:16:18.564210 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 18:16:18.564245 systemd-journald[1188]: Collecting audit messages is disabled.
May 27 18:16:18.564268 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 18:16:18.564279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 18:16:18.564289 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 18:16:18.564299 systemd[1]: Stopped verity-setup.service.
May 27 18:16:18.564311 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 18:16:18.564322 systemd-journald[1188]: Journal started
May 27 18:16:18.564342 systemd-journald[1188]: Runtime Journal (/run/log/journal/5a44a00830e3414b88a54c38bbf523e4) is 8M, max 78.5M, 70.5M free.
May 27 18:16:18.254149 systemd[1]: Queued start job for default target multi-user.target.
May 27 18:16:18.268296 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 18:16:18.268998 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 18:16:18.576894 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 18:16:18.576923 kernel: loop: module loaded
May 27 18:16:18.568384 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 18:16:18.570347 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 18:16:18.570970 systemd[1]: Mounted media.mount - External Media Directory.
May 27 18:16:18.571585 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
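The Runtime Journal sizing line reflects journald's automatic limits for the volatile journal under /run/log/journal; these limits can be pinned explicitly instead of being derived from filesystem size. A hypothetical drop-in (the option names are real journald.conf settings; the path and values are illustrative only, not taken from this system):

```ini
# Hypothetical /etc/systemd/journald.conf.d/size.conf
[Journal]
RuntimeMaxUse=64M    ; cap the volatile journal in /run/log/journal
SystemMaxUse=512M    ; cap the persistent journal in /var/log/journal
```

On Flatcar such a file would normally be delivered through the same Ignition config that wrote the other /etc content above.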
May 27 18:16:18.572288 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 18:16:18.574604 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 18:16:18.575387 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 18:16:18.577661 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 18:16:18.577876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 18:16:18.578797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 18:16:18.579009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 18:16:18.580023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 18:16:18.580425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 18:16:18.582982 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 18:16:18.583246 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 18:16:18.584040 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 18:16:18.584252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 18:16:18.585075 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 18:16:18.585910 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 18:16:18.593758 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 18:16:18.612278 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 18:16:18.614323 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 18:16:18.615038 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 18:16:18.615111 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 18:16:18.617019 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 18:16:18.620638 kernel: ACPI: bus type drm_connector registered
May 27 18:16:18.623371 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 18:16:18.624035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 18:16:18.627398 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 18:16:18.630388 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 18:16:18.631246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 18:16:18.633382 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 18:16:18.633976 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 18:16:18.638012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 18:16:18.643432 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 18:16:18.646392 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 18:16:18.650755 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 18:16:18.651483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 18:16:18.653685 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 18:16:18.656360 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 18:16:18.657821 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 18:16:18.659737 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 18:16:18.671248 systemd-journald[1188]: Time spent on flushing to /var/log/journal/5a44a00830e3414b88a54c38bbf523e4 is 66.543ms for 993 entries.
May 27 18:16:18.671248 systemd-journald[1188]: System Journal (/var/log/journal/5a44a00830e3414b88a54c38bbf523e4) is 8M, max 195.6M, 187.6M free.
May 27 18:16:18.753311 systemd-journald[1188]: Received client request to flush runtime journal.
May 27 18:16:18.753368 kernel: loop0: detected capacity change from 0 to 224512
May 27 18:16:18.675612 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 18:16:18.694621 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 18:16:18.695895 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 18:16:18.701638 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 18:16:18.755828 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 18:16:18.758380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 18:16:18.762660 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 18:16:18.776520 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 18:16:18.784295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 18:16:18.789224 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 18:16:18.813982 kernel: loop1: detected capacity change from 0 to 8
May 27 18:16:18.832197 kernel: loop2: detected capacity change from 0 to 146240
May 27 18:16:18.833875 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 18:16:18.842400 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 27 18:16:18.842418 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 27 18:16:18.847651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 18:16:18.878210 kernel: loop3: detected capacity change from 0 to 113872
May 27 18:16:18.915212 kernel: loop4: detected capacity change from 0 to 224512
May 27 18:16:18.943331 kernel: loop5: detected capacity change from 0 to 8
May 27 18:16:18.948238 kernel: loop6: detected capacity change from 0 to 146240
May 27 18:16:18.974195 kernel: loop7: detected capacity change from 0 to 113872
May 27 18:16:18.989408 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 27 18:16:18.990905 (sd-merge)[1261]: Merged extensions into '/usr'.
May 27 18:16:19.000925 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 18:16:19.001237 systemd[1]: Reloading...
May 27 18:16:19.118214 zram_generator::config[1293]: No configuration found.
May 27 18:16:19.241273 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 18:16:19.254775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 18:16:19.325919 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 18:16:19.326240 systemd[1]: Reloading finished in 324 ms.
May 27 18:16:19.350772 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 18:16:19.351905 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 18:16:19.352919 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 18:16:19.363305 systemd[1]: Starting ensure-sysext.service...
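The loopN capacity changes and the (sd-merge) lines record systemd-sysext loopback-mounting the extension images and overlaying them onto /usr and /opt. The mechanism is purely filesystem-driven; the only piece visible in this log is the link Ignition wrote earlier:

```text
# systemd-sysext scans /etc/extensions (among other directories, e.g.
# /run/extensions and /var/lib/extensions) for *.raw images and merges
# their /usr and /opt trees via an overlay. The link from op(9) above:
/etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
```

After boot, `systemd-sysext status` on the running system would list the merged extensions named in the (sd-merge) line.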
May 27 18:16:19.367293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 18:16:19.377379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 18:16:19.393339 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)... May 27 18:16:19.393486 systemd[1]: Reloading... May 27 18:16:19.398221 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 18:16:19.398270 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 18:16:19.398572 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 18:16:19.398855 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 18:16:19.401611 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 18:16:19.401817 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. May 27 18:16:19.401882 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. May 27 18:16:19.408448 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:16:19.408468 systemd-tmpfiles[1332]: Skipping /boot May 27 18:16:19.431726 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot. May 27 18:16:19.431743 systemd-tmpfiles[1332]: Skipping /boot May 27 18:16:19.449085 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 27 18:16:19.492203 zram_generator::config[1362]: No configuration found. 
May 27 18:16:19.666802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:16:19.779108 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 18:16:19.779801 systemd[1]: Reloading finished in 385 ms. May 27 18:16:19.782202 kernel: mousedev: PS/2 mouse device common for all mice May 27 18:16:19.786062 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 18:16:19.788753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 18:16:19.802470 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 18:16:19.823201 kernel: ACPI: button: Power Button [PWRF] May 27 18:16:19.843186 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:16:19.845349 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:16:19.848552 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 18:16:19.849299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:16:19.852252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 18:16:19.853804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 18:16:19.864475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 18:16:19.865129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 27 18:16:19.865227 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:16:19.867258 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 18:16:19.871648 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 18:16:19.880402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 18:16:19.884035 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 18:16:19.884724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:16:19.891228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:16:19.891438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 18:16:19.906633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 18:16:19.908346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 18:16:19.908441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 18:16:19.908549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 18:16:19.909440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 27 18:16:19.909883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 18:16:19.911530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 18:16:19.912214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 18:16:19.913614 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 18:16:19.914603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 18:16:19.924063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 18:16:19.924730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 18:16:19.927573 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 18:16:19.929134 systemd[1]: Finished ensure-sysext.service. May 27 18:16:19.937486 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 18:16:19.950674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 18:16:19.957237 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 18:16:19.964203 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 18:16:19.964549 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 18:16:19.970363 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 18:16:19.976676 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 18:16:19.977625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 18:16:20.003099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
May 27 18:16:20.004138 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 18:16:20.014453 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 18:16:20.031303 augenrules[1494]: No rules May 27 18:16:20.031242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 18:16:20.032021 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:16:20.032275 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:16:20.046604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 27 18:16:20.053333 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 18:16:20.069328 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 18:16:20.085375 kernel: EDAC MC: Ver: 3.0.0 May 27 18:16:20.088086 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 18:16:20.193001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 18:16:20.270918 systemd-resolved[1455]: Positive Trust Anchors: May 27 18:16:20.270941 systemd-resolved[1455]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 18:16:20.270968 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 18:16:20.274478 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 18:16:20.275323 systemd[1]: Reached target time-set.target - System Time Set. May 27 18:16:20.275760 systemd-resolved[1455]: Defaulting to hostname 'linux'. May 27 18:16:20.279336 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 18:16:20.281498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 18:16:20.282066 systemd[1]: Reached target sysinit.target - System Initialization. May 27 18:16:20.282250 systemd-networkd[1451]: lo: Link UP May 27 18:16:20.282259 systemd-networkd[1451]: lo: Gained carrier May 27 18:16:20.282701 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 18:16:20.283688 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 18:16:20.283920 systemd-networkd[1451]: Enumeration completed May 27 18:16:20.284370 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 18:16:20.284545 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 18:16:20.284550 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 18:16:20.285306 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 18:16:20.285554 systemd-networkd[1451]: eth0: Link UP May 27 18:16:20.286002 systemd-networkd[1451]: eth0: Gained carrier May 27 18:16:20.286023 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 18:16:20.286154 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 18:16:20.286822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 18:16:20.287482 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 18:16:20.287560 systemd[1]: Reached target paths.target - Path Units. May 27 18:16:20.288300 systemd[1]: Reached target timers.target - Timer Units. May 27 18:16:20.290134 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 18:16:20.292653 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 18:16:20.296240 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 18:16:20.297161 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 18:16:20.297753 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 18:16:20.301513 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 18:16:20.302469 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 18:16:20.303620 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 18:16:20.304317 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
May 27 18:16:20.305613 systemd[1]: Reached target network.target - Network. May 27 18:16:20.306107 systemd[1]: Reached target sockets.target - Socket Units. May 27 18:16:20.306637 systemd[1]: Reached target basic.target - Basic System. May 27 18:16:20.307201 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 18:16:20.307232 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 18:16:20.308274 systemd[1]: Starting containerd.service - containerd container runtime... May 27 18:16:20.312289 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 18:16:20.326778 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 18:16:20.328359 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 18:16:20.333650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 18:16:20.335420 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 18:16:20.335983 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 18:16:20.336952 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 18:16:20.352565 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 18:16:20.355578 oslogin_cache_refresh[1527]: Refreshing passwd entry cache May 27 18:16:20.354089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 27 18:16:20.362493 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache May 27 18:16:20.362493 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting May 27 18:16:20.362493 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:16:20.362493 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache May 27 18:16:20.362493 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting May 27 18:16:20.362493 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:16:20.357125 oslogin_cache_refresh[1527]: Failure getting users, quitting May 27 18:16:20.357394 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 18:16:20.357138 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 18:16:20.359230 oslogin_cache_refresh[1527]: Refreshing group entry cache May 27 18:16:20.359617 oslogin_cache_refresh[1527]: Failure getting groups, quitting May 27 18:16:20.359625 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 18:16:20.365633 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 18:16:20.373252 jq[1525]: false May 27 18:16:20.374364 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 18:16:20.379703 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 18:16:20.382978 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 27 18:16:20.385763 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 18:16:20.386668 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 18:16:20.393356 systemd[1]: Starting update-engine.service - Update Engine... May 27 18:16:20.395684 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 18:16:20.400679 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 18:16:20.401937 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 18:16:20.403201 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 18:16:20.403563 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 18:16:20.403782 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 18:16:20.408875 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 18:16:20.409191 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 27 18:16:20.443084 jq[1541]: true May 27 18:16:20.456378 update_engine[1540]: I20250527 18:16:20.455860 1540 main.cc:92] Flatcar Update Engine starting May 27 18:16:20.457790 extend-filesystems[1526]: Found loop4 May 27 18:16:20.457790 extend-filesystems[1526]: Found loop5 May 27 18:16:20.457790 extend-filesystems[1526]: Found loop6 May 27 18:16:20.457790 extend-filesystems[1526]: Found loop7 May 27 18:16:20.457790 extend-filesystems[1526]: Found sda May 27 18:16:20.457790 extend-filesystems[1526]: Found sda1 May 27 18:16:20.457790 extend-filesystems[1526]: Found sda2 May 27 18:16:20.457790 extend-filesystems[1526]: Found sda3 May 27 18:16:20.457790 extend-filesystems[1526]: Found usr May 27 18:16:20.457790 extend-filesystems[1526]: Found sda4 May 27 18:16:20.457790 extend-filesystems[1526]: Found sda6 May 27 18:16:20.457790 extend-filesystems[1526]: Found sda7 May 27 18:16:20.534744 extend-filesystems[1526]: Found sda9 May 27 18:16:20.534744 extend-filesystems[1526]: Checking size of /dev/sda9 May 27 18:16:20.534744 extend-filesystems[1526]: Resized partition /dev/sda9 May 27 18:16:20.582327 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 27 18:16:20.582359 update_engine[1540]: I20250527 18:16:20.518848 1540 update_check_scheduler.cc:74] Next update check in 5m37s May 27 18:16:20.514505 dbus-daemon[1523]: [system] SELinux support is enabled May 27 18:16:20.491712 systemd[1]: motdgen.service: Deactivated successfully. May 27 18:16:20.582817 extend-filesystems[1566]: resize2fs 1.47.2 (1-Jan-2025) May 27 18:16:20.584290 coreos-metadata[1522]: May 27 18:16:20.554 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 27 18:16:20.584498 jq[1560]: true May 27 18:16:20.496410 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 27 18:16:20.584718 tar[1545]: linux-amd64/LICENSE May 27 18:16:20.496693 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 18:16:20.536594 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 18:16:20.537732 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button) May 27 18:16:20.537755 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 18:16:20.541757 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 18:16:20.548292 systemd-logind[1534]: New seat seat0. May 27 18:16:20.577266 systemd[1]: Started systemd-logind.service - User Login Management. May 27 18:16:20.581621 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 18:16:20.581654 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 18:16:20.583386 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 18:16:20.583404 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 18:16:20.588946 tar[1545]: linux-amd64/helm May 27 18:16:20.590576 systemd[1]: Started update-engine.service - Update Engine. May 27 18:16:20.595358 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.systemd1' May 27 18:16:20.603035 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 27 18:16:20.723040 bash[1594]: Updated "/home/core/.ssh/authorized_keys" May 27 18:16:20.730481 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 18:16:20.734623 systemd[1]: Starting sshkeys.service... May 27 18:16:20.768668 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 18:16:20.771259 systemd-networkd[1451]: eth0: DHCPv4 address 172.237.129.174/24, gateway 172.237.129.1 acquired from 23.205.166.191 May 27 18:16:20.771642 dbus-daemon[1523]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1451 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 18:16:20.775810 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 18:16:20.776005 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. May 27 18:16:20.782120 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
May 27 18:16:20.831052 containerd[1559]: time="2025-05-27T18:16:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 18:16:20.837425 containerd[1559]: time="2025-05-27T18:16:20.834327574Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.854757574Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.01µs" May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.854784034Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.854801614Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.855505784Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.855523024Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.855629134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.855698864Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.855709664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:16:20.860485 
containerd[1559]: time="2025-05-27T18:16:20.856306215Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.856321955Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.856331555Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 18:16:20.860485 containerd[1559]: time="2025-05-27T18:16:20.856339255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 18:16:20.875095 containerd[1559]: time="2025-05-27T18:16:20.856818965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 18:16:20.875095 containerd[1559]: time="2025-05-27T18:16:20.857371675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:16:20.875095 containerd[1559]: time="2025-05-27T18:16:20.857404755Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 18:16:20.875095 containerd[1559]: time="2025-05-27T18:16:20.857413725Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 18:16:20.875095 containerd[1559]: time="2025-05-27T18:16:20.857639725Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 18:16:20.875095 containerd[1559]: 
time="2025-05-27T18:16:20.858204846Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 18:16:20.875095 containerd[1559]: time="2025-05-27T18:16:20.859216696Z" level=info msg="metadata content store policy set" policy=shared May 27 18:16:20.878743 containerd[1559]: time="2025-05-27T18:16:20.878707966Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 18:16:20.878781 containerd[1559]: time="2025-05-27T18:16:20.878752476Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 18:16:20.878781 containerd[1559]: time="2025-05-27T18:16:20.878767166Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 18:16:20.878781 containerd[1559]: time="2025-05-27T18:16:20.878777956Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 18:16:20.878832 containerd[1559]: time="2025-05-27T18:16:20.878789756Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 18:16:20.878832 containerd[1559]: time="2025-05-27T18:16:20.878803436Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 18:16:20.878832 containerd[1559]: time="2025-05-27T18:16:20.878815426Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 18:16:20.878924 containerd[1559]: time="2025-05-27T18:16:20.878895406Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 18:16:20.878974 containerd[1559]: time="2025-05-27T18:16:20.878919596Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 18:16:20.879068 containerd[1559]: time="2025-05-27T18:16:20.879004106Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 18:16:20.879068 containerd[1559]: time="2025-05-27T18:16:20.879063676Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 18:16:20.879116 containerd[1559]: time="2025-05-27T18:16:20.879076266Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 18:16:20.880864 containerd[1559]: time="2025-05-27T18:16:20.880827647Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 18:16:20.880897 containerd[1559]: time="2025-05-27T18:16:20.880879857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 18:16:20.880918 containerd[1559]: time="2025-05-27T18:16:20.880899127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 18:16:20.880918 containerd[1559]: time="2025-05-27T18:16:20.880911717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 18:16:20.881233 containerd[1559]: time="2025-05-27T18:16:20.880928877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 18:16:20.881233 containerd[1559]: time="2025-05-27T18:16:20.881230367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 18:16:20.881611 containerd[1559]: time="2025-05-27T18:16:20.881578327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 18:16:20.881611 containerd[1559]: time="2025-05-27T18:16:20.881605007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 18:16:20.881658 containerd[1559]: time="2025-05-27T18:16:20.881617097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1
May 27 18:16:20.881743 containerd[1559]: time="2025-05-27T18:16:20.881628197Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 18:16:20.881743 containerd[1559]: time="2025-05-27T18:16:20.881732497Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 18:16:20.883292 containerd[1559]: time="2025-05-27T18:16:20.883237808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 18:16:20.883292 containerd[1559]: time="2025-05-27T18:16:20.883287768Z" level=info msg="Start snapshots syncer"
May 27 18:16:20.883413 containerd[1559]: time="2025-05-27T18:16:20.883321728Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 18:16:20.884441 containerd[1559]: time="2025-05-27T18:16:20.884375529Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 18:16:20.884567 containerd[1559]: time="2025-05-27T18:16:20.884487099Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 18:16:20.887747 containerd[1559]: time="2025-05-27T18:16:20.887710890Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 18:16:20.887871 containerd[1559]: time="2025-05-27T18:16:20.887838271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 18:16:20.887896 containerd[1559]: time="2025-05-27T18:16:20.887876951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 18:16:20.887896 containerd[1559]: time="2025-05-27T18:16:20.887890931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 18:16:20.887931 containerd[1559]: time="2025-05-27T18:16:20.887901401Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 18:16:20.887931 containerd[1559]: time="2025-05-27T18:16:20.887913571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 18:16:20.887931 containerd[1559]: time="2025-05-27T18:16:20.887923451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 18:16:20.888203 containerd[1559]: time="2025-05-27T18:16:20.887933671Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 18:16:20.888203 containerd[1559]: time="2025-05-27T18:16:20.887955981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 18:16:20.888203 containerd[1559]: time="2025-05-27T18:16:20.887966581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 18:16:20.888203 containerd[1559]: time="2025-05-27T18:16:20.887976061Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 18:16:20.890479 containerd[1559]: time="2025-05-27T18:16:20.890446772Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 18:16:20.890479 containerd[1559]: time="2025-05-27T18:16:20.890476932Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 18:16:20.890546 containerd[1559]: time="2025-05-27T18:16:20.890487272Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 18:16:20.890546 containerd[1559]: time="2025-05-27T18:16:20.890497342Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 18:16:20.890546 containerd[1559]: time="2025-05-27T18:16:20.890505032Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 18:16:20.890546 containerd[1559]: time="2025-05-27T18:16:20.890517162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 18:16:20.890546 containerd[1559]: time="2025-05-27T18:16:20.890527332Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 18:16:20.892195 containerd[1559]: time="2025-05-27T18:16:20.890846532Z" level=info msg="runtime interface created"
May 27 18:16:20.892195 containerd[1559]: time="2025-05-27T18:16:20.890863852Z" level=info msg="created NRI interface"
May 27 18:16:20.892195 containerd[1559]: time="2025-05-27T18:16:20.890872822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 18:16:20.892195 containerd[1559]: time="2025-05-27T18:16:20.890885062Z" level=info msg="Connect containerd service"
May 27 18:16:20.892195 containerd[1559]: time="2025-05-27T18:16:20.890907752Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 18:16:20.904789 containerd[1559]: time="2025-05-27T18:16:20.904745709Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 18:16:20.940359 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 27 18:16:20.942692 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 18:16:20.943800 dbus-daemon[1523]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1598 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 18:16:20.950044 systemd[1]: Starting polkit.service - Authorization Manager...
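The `failed to load cni during init` error above is expected on a freshly booted node: containerd's CRI plugin scans `/etc/cni/net.d` (the `confDir` in the cri-plugin config it just printed) and finds nothing, because no CNI plugin has installed a network config yet. As a hedged illustration of what it is looking for (file name, network name, subnet, and plugin choices below are hypothetical, not taken from this host), a minimal `.conflist` has this shape:

```shell
# Illustrative only: write a minimal CNI conflist of the kind containerd
# scans for, into a temp dir standing in for /etc/cni/net.d.
CNI_DIR="$(mktemp -d)"
cat > "$CNI_DIR/10-containerd-net.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# The loader's minimum requirement is that the file parse as JSON with a
# network "name"; check the sketch the same way:
python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["name"])' \
  "$CNI_DIR/10-containerd-net.conflist"
```

Once a real CNI plugin (e.g. flannel or Calico, typically via a DaemonSet) drops its conflist into `/etc/cni/net.d`, the cni conf syncer started later in this log picks it up without a containerd restart.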
May 27 18:16:20.953781 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 18:16:20.977853 coreos-metadata[1597]: May 27 18:16:20.976 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 27 18:16:20.990598 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 27 18:16:20.998644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 18:16:21.002116 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 18:16:21.011659 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 18:16:21.013277 extend-filesystems[1566]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 27 18:16:21.013277 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 10 May 27 18:16:21.013277 extend-filesystems[1566]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 27 18:16:21.021098 extend-filesystems[1526]: Resized filesystem in /dev/sda9 May 27 18:16:21.017720 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 18:16:21.019242 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 18:16:21.033882 systemd[1]: issuegen.service: Deactivated successfully. May 27 18:16:21.034120 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 18:16:21.039435 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 18:16:21.076182 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 18:16:21.080764 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 18:16:21.083019 containerd[1559]: time="2025-05-27T18:16:21.082986648Z" level=info msg="Start subscribing containerd event" May 27 18:16:21.083144 containerd[1559]: time="2025-05-27T18:16:21.083114388Z" level=info msg="Start recovering state" May 27 18:16:21.085411 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
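The extend-filesystems records above show an on-line ext4 grow: the filesystem on `/dev/sda9` is resized while mounted at `/`, ending at 20360187 blocks of 4 KiB. A quick sanity check of the resulting size (pure arithmetic on the numbers in the log):

```shell
# Numbers from the resize2fs/extend-filesystems records above:
# 20360187 blocks x 4096 bytes/block.
BLOCKS=20360187
BLOCK_SIZE=4096
BYTES=$((BLOCKS * BLOCK_SIZE))
echo "$BYTES bytes"
# POSIX shell arithmetic is integer-only, so use awk for the GiB figure:
awk -v b="$BYTES" 'BEGIN { printf "%.2f GiB\n", b / (1024 * 1024 * 1024) }'
# -> 83395325952 bytes, 77.67 GiB
```

That matches a typical ~80 GB Linode disk after the root partition is grown on first boot.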
May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086841400Z" level=info msg="Start event monitor" May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086861430Z" level=info msg="Start cni network conf syncer for default" May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086870910Z" level=info msg="Start streaming server" May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086880910Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086889330Z" level=info msg="runtime interface starting up..." May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086894990Z" level=info msg="starting plugins..." May 27 18:16:21.087544 containerd[1559]: time="2025-05-27T18:16:21.086910040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 18:16:21.086266 systemd[1]: Reached target getty.target - Login Prompts. May 27 18:16:21.089796 coreos-metadata[1597]: May 27 18:16:21.089 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 27 18:16:21.094742 containerd[1559]: time="2025-05-27T18:16:21.094665704Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 18:16:21.094807 containerd[1559]: time="2025-05-27T18:16:21.094783694Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 18:16:21.096796 containerd[1559]: time="2025-05-27T18:16:21.094861804Z" level=info msg="containerd successfully booted in 0.264968s" May 27 18:16:21.094966 systemd[1]: Started containerd.service - containerd container runtime. 
May 27 18:16:21.115869 polkitd[1610]: Started polkitd version 126 May 27 18:16:21.119953 polkitd[1610]: Loading rules from directory /etc/polkit-1/rules.d May 27 18:16:21.120240 polkitd[1610]: Loading rules from directory /run/polkit-1/rules.d May 27 18:16:21.120276 polkitd[1610]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 18:16:21.120486 polkitd[1610]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 18:16:21.120504 polkitd[1610]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 18:16:21.120540 polkitd[1610]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 18:16:21.121244 polkitd[1610]: Finished loading, compiling and executing 2 rules May 27 18:16:21.121513 systemd[1]: Started polkit.service - Authorization Manager. May 27 18:16:21.121797 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 27 18:16:21.122230 polkitd[1610]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 27 18:16:21.131130 systemd-hostnamed[1598]: Hostname set to <172-237-129-174> (transient) May 27 18:16:21.131444 systemd-resolved[1455]: System hostname changed to '172-237-129-174'. May 27 18:16:21.333360 coreos-metadata[1597]: May 27 18:16:21.333 INFO Fetch successful May 27 18:16:21.356078 update-ssh-keys[1652]: Updated "/home/core/.ssh/authorized_keys" May 27 18:16:21.357260 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 18:16:21.362380 systemd[1]: Finished sshkeys.service. May 27 18:16:21.395681 tar[1545]: linux-amd64/README.md May 27 18:16:21.415384 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 27 18:16:21.564649 coreos-metadata[1522]: May 27 18:16:21.564 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 27 18:16:21.656510 coreos-metadata[1522]: May 27 18:16:21.656 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 27 18:16:21.850824 coreos-metadata[1522]: May 27 18:16:21.850 INFO Fetch successful May 27 18:16:21.850890 coreos-metadata[1522]: May 27 18:16:21.850 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 27 18:16:22.054542 systemd-networkd[1451]: eth0: Gained IPv6LL May 27 18:16:22.055159 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. May 27 18:16:22.057496 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 18:16:22.058619 systemd[1]: Reached target network-online.target - Network is Online. May 27 18:16:22.062530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:16:22.065350 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 18:16:22.091547 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 18:16:22.104954 coreos-metadata[1522]: May 27 18:16:22.104 INFO Fetch successful May 27 18:16:22.192604 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 18:16:22.193618 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 18:16:23.010003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:16:23.010978 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 18:16:23.047122 systemd[1]: Startup finished in 2.837s (kernel) + 7.035s (initrd) + 5.404s (userspace) = 15.277s. 
May 27 18:16:23.052543 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:16:23.555680 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. May 27 18:16:23.611756 kubelet[1696]: E0527 18:16:23.611719 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:16:23.615307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:16:23.615506 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:16:23.616062 systemd[1]: kubelet.service: Consumed 852ms CPU time, 265M memory peak. May 27 18:16:25.062946 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. May 27 18:16:25.083380 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 18:16:25.084990 systemd[1]: Started sshd@0-172.237.129.174:22-139.178.89.65:47182.service - OpenSSH per-connection server daemon (139.178.89.65:47182). May 27 18:16:25.457498 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 47182 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:25.459662 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:25.465629 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 18:16:25.467028 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 18:16:25.474201 systemd-logind[1534]: New session 1 of user core. May 27 18:16:25.488832 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
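The kubelet failure above is the standard "node not yet joined" state: the unit starts kubelet pointed at `/var/lib/kubelet/config.yaml`, a file that `kubeadm init`/`kubeadm join` writes, so until one of those runs, kubelet exits with status 1 and systemd keeps restarting it (the restart is visible further down in this log). As a sketch only (the values are illustrative, and a real file is generated by kubeadm, not written by hand), the missing file is a KubeletConfiguration manifest:

```shell
# Hypothetical sketch: show the minimal shape of the file kubelet wants.
# Using a temp dir as a stand-in for /var/lib/kubelet; cgroupDriver=systemd
# matches the SystemdCgroup=true in the containerd config earlier in this log.
KUBELET_DIR="$(mktemp -d)"
cat > "$KUBELET_DIR/config.yaml" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
grep -c 'kind: KubeletConfiguration' "$KUBELET_DIR/config.yaml"
```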
May 27 18:16:25.491973 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 18:16:25.503565 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 18:16:25.505943 systemd-logind[1534]: New session c1 of user core. May 27 18:16:25.630409 systemd[1712]: Queued start job for default target default.target. May 27 18:16:25.641390 systemd[1712]: Created slice app.slice - User Application Slice. May 27 18:16:25.641417 systemd[1712]: Reached target paths.target - Paths. May 27 18:16:25.641520 systemd[1712]: Reached target timers.target - Timers. May 27 18:16:25.642869 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 18:16:25.652738 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 18:16:25.652792 systemd[1712]: Reached target sockets.target - Sockets. May 27 18:16:25.652829 systemd[1712]: Reached target basic.target - Basic System. May 27 18:16:25.652867 systemd[1712]: Reached target default.target - Main User Target. May 27 18:16:25.652897 systemd[1712]: Startup finished in 141ms. May 27 18:16:25.653018 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 18:16:25.661298 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 18:16:25.942434 systemd[1]: Started sshd@1-172.237.129.174:22-139.178.89.65:47184.service - OpenSSH per-connection server daemon (139.178.89.65:47184). May 27 18:16:26.301507 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 47184 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:26.302644 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:26.307556 systemd-logind[1534]: New session 2 of user core. May 27 18:16:26.322278 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 18:16:26.562728 sshd[1725]: Connection closed by 139.178.89.65 port 47184 May 27 18:16:26.563211 sshd-session[1723]: pam_unix(sshd:session): session closed for user core May 27 18:16:26.567112 systemd[1]: sshd@1-172.237.129.174:22-139.178.89.65:47184.service: Deactivated successfully. May 27 18:16:26.569087 systemd[1]: session-2.scope: Deactivated successfully. May 27 18:16:26.569888 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. May 27 18:16:26.570952 systemd-logind[1534]: Removed session 2. May 27 18:16:26.627118 systemd[1]: Started sshd@2-172.237.129.174:22-139.178.89.65:47200.service - OpenSSH per-connection server daemon (139.178.89.65:47200). May 27 18:16:26.989775 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 47200 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:26.991098 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:26.995515 systemd-logind[1534]: New session 3 of user core. May 27 18:16:26.999256 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 18:16:27.249280 sshd[1733]: Connection closed by 139.178.89.65 port 47200 May 27 18:16:27.249258 sshd-session[1731]: pam_unix(sshd:session): session closed for user core May 27 18:16:27.255370 systemd[1]: sshd@2-172.237.129.174:22-139.178.89.65:47200.service: Deactivated successfully. May 27 18:16:27.260213 systemd[1]: session-3.scope: Deactivated successfully. May 27 18:16:27.262765 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. May 27 18:16:27.264092 systemd-logind[1534]: Removed session 3. May 27 18:16:27.312929 systemd[1]: Started sshd@3-172.237.129.174:22-139.178.89.65:47214.service - OpenSSH per-connection server daemon (139.178.89.65:47214). 
May 27 18:16:27.670786 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 47214 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:27.672326 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:27.676784 systemd-logind[1534]: New session 4 of user core. May 27 18:16:27.683276 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 18:16:27.930567 sshd[1741]: Connection closed by 139.178.89.65 port 47214 May 27 18:16:27.931764 sshd-session[1739]: pam_unix(sshd:session): session closed for user core May 27 18:16:27.937133 systemd[1]: sshd@3-172.237.129.174:22-139.178.89.65:47214.service: Deactivated successfully. May 27 18:16:27.940025 systemd[1]: session-4.scope: Deactivated successfully. May 27 18:16:27.940749 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. May 27 18:16:27.942146 systemd-logind[1534]: Removed session 4. May 27 18:16:27.996241 systemd[1]: Started sshd@4-172.237.129.174:22-139.178.89.65:47222.service - OpenSSH per-connection server daemon (139.178.89.65:47222). May 27 18:16:28.362357 sshd[1747]: Accepted publickey for core from 139.178.89.65 port 47222 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:28.364315 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:28.372061 systemd-logind[1534]: New session 5 of user core. May 27 18:16:28.381314 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 18:16:28.577136 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 18:16:28.577480 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:16:28.588698 sudo[1750]: pam_unix(sudo:session): session closed for user root May 27 18:16:28.643445 sshd[1749]: Connection closed by 139.178.89.65 port 47222 May 27 18:16:28.644241 sshd-session[1747]: pam_unix(sshd:session): session closed for user core May 27 18:16:28.647949 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. May 27 18:16:28.648560 systemd[1]: sshd@4-172.237.129.174:22-139.178.89.65:47222.service: Deactivated successfully. May 27 18:16:28.650346 systemd[1]: session-5.scope: Deactivated successfully. May 27 18:16:28.651670 systemd-logind[1534]: Removed session 5. May 27 18:16:28.711073 systemd[1]: Started sshd@5-172.237.129.174:22-139.178.89.65:47236.service - OpenSSH per-connection server daemon (139.178.89.65:47236). May 27 18:16:29.078417 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 47236 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:29.080388 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:29.087377 systemd-logind[1534]: New session 6 of user core. May 27 18:16:29.093269 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 18:16:29.289538 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 18:16:29.289853 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:16:29.293379 sudo[1760]: pam_unix(sudo:session): session closed for user root May 27 18:16:29.297736 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 18:16:29.298009 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:16:29.305802 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 18:16:29.349236 augenrules[1782]: No rules May 27 18:16:29.350549 systemd[1]: audit-rules.service: Deactivated successfully. May 27 18:16:29.350786 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 18:16:29.351563 sudo[1759]: pam_unix(sudo:session): session closed for user root May 27 18:16:29.406590 sshd[1758]: Connection closed by 139.178.89.65 port 47236 May 27 18:16:29.407091 sshd-session[1756]: pam_unix(sshd:session): session closed for user core May 27 18:16:29.410145 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. May 27 18:16:29.410669 systemd[1]: sshd@5-172.237.129.174:22-139.178.89.65:47236.service: Deactivated successfully. May 27 18:16:29.412348 systemd[1]: session-6.scope: Deactivated successfully. May 27 18:16:29.413618 systemd-logind[1534]: Removed session 6. May 27 18:16:29.469738 systemd[1]: Started sshd@6-172.237.129.174:22-139.178.89.65:47248.service - OpenSSH per-connection server daemon (139.178.89.65:47248). 
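The `augenrules: No rules` result above follows directly from the two sudo commands: the first deleted the `*.rules` fragments under `/etc/audit/rules.d`, so when audit-rules restarted, augenrules had nothing to merge. Conceptually (simplified; real augenrules also handles ordering, de-duplication, and control options), its job is to concatenate the fragments into one ruleset for auditctl:

```shell
# Simplified sketch of the augenrules merge step, in a temp dir standing in
# for /etc/audit/rules.d (the rule below is illustrative, not from this host).
RULES_D="$(mktemp -d)"
cat > "$RULES_D/99-default.rules" <<'EOF'
-w /etc/passwd -p wa -k identity
EOF
# Merge every fragment into a single generated ruleset:
cat "$RULES_D"/*.rules > "$RULES_D/audit.rules"
wc -l < "$RULES_D/audit.rules"
```

With the fragment directory emptied, the merged file would be empty too, which is exactly what the "No rules" message reports.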
May 27 18:16:29.827048 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 47248 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:16:29.828152 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:16:29.832529 systemd-logind[1534]: New session 7 of user core. May 27 18:16:29.835271 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 18:16:30.033801 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 18:16:30.034048 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 18:16:30.270554 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 18:16:30.283418 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 18:16:30.440549 dockerd[1813]: time="2025-05-27T18:16:30.440495674Z" level=info msg="Starting up" May 27 18:16:30.442558 dockerd[1813]: time="2025-05-27T18:16:30.442211374Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 18:16:30.476143 systemd[1]: var-lib-docker-metacopy\x2dcheck605241261-merged.mount: Deactivated successfully. May 27 18:16:30.497520 dockerd[1813]: time="2025-05-27T18:16:30.497485932Z" level=info msg="Loading containers: start." May 27 18:16:30.507194 kernel: Initializing XFRM netlink socket May 27 18:16:30.669295 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. May 27 18:16:30.706732 systemd-networkd[1451]: docker0: Link UP May 27 18:16:30.710899 dockerd[1813]: time="2025-05-27T18:16:30.710867109Z" level=info msg="Loading containers: done." May 27 18:16:30.722664 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck370204474-merged.mount: Deactivated successfully. 
May 27 18:16:30.724412 dockerd[1813]: time="2025-05-27T18:16:30.723822275Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 18:16:30.724627 dockerd[1813]: time="2025-05-27T18:16:30.724607126Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 18:16:30.724798 dockerd[1813]: time="2025-05-27T18:16:30.724782756Z" level=info msg="Initializing buildkit" May 27 18:16:30.743404 dockerd[1813]: time="2025-05-27T18:16:30.743385905Z" level=info msg="Completed buildkit initialization" May 27 18:16:30.749465 dockerd[1813]: time="2025-05-27T18:16:30.749446258Z" level=info msg="Daemon has completed initialization" May 27 18:16:30.749631 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 18:16:30.750256 dockerd[1813]: time="2025-05-27T18:16:30.750022338Z" level=info msg="API listen on /run/docker.sock" May 27 18:16:31.364591 systemd-timesyncd[1473]: Contacted time server [2600:1700:5455:a70::7b:2]:123 (2.flatcar.pool.ntp.org). May 27 18:16:31.364818 systemd-resolved[1455]: Clock change detected. Flushing caches. May 27 18:16:31.365390 systemd-timesyncd[1473]: Initial clock synchronization to Tue 2025-05-27 18:16:31.364378 UTC. May 27 18:16:31.721594 containerd[1559]: time="2025-05-27T18:16:31.721481929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 18:16:32.576125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719510140.mount: Deactivated successfully. May 27 18:16:32.752314 systemd[1]: Started sshd@7-172.237.129.174:22-181.129.31.42:39873.service - OpenSSH per-connection server daemon (181.129.31.42:39873). 
May 27 18:16:32.929379 sshd[2028]: Connection closed by 181.129.31.42 port 39873
May 27 18:16:32.931111 systemd[1]: sshd@7-172.237.129.174:22-181.129.31.42:39873.service: Deactivated successfully.
May 27 18:16:34.237449 containerd[1559]: time="2025-05-27T18:16:34.237398166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:34.238275 containerd[1559]: time="2025-05-27T18:16:34.238211446Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 27 18:16:34.239740 containerd[1559]: time="2025-05-27T18:16:34.238841747Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:34.240732 containerd[1559]: time="2025-05-27T18:16:34.240711418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:34.241377 containerd[1559]: time="2025-05-27T18:16:34.241356628Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.519839059s"
May 27 18:16:34.241462 containerd[1559]: time="2025-05-27T18:16:34.241446958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 18:16:34.242010 containerd[1559]: time="2025-05-27T18:16:34.241963698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 18:16:34.285813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 18:16:34.287447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 18:16:34.460245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 18:16:34.465483 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 18:16:34.501248 kubelet[2079]: E0527 18:16:34.501133 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 18:16:34.505798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 18:16:34.505998 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 18:16:34.506563 systemd[1]: kubelet.service: Consumed 181ms CPU time, 108.5M memory peak.
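The kube-apiserver pull records above give enough data for a back-of-envelope transfer rate: the image's reported size of 28794611 bytes arrived in 2.519839059 s, roughly 10.9 MiB/s.

```shell
# Pull rate for the kube-apiserver image, using the size and duration
# reported in the "Pulled image" record above:
awk 'BEGIN {
  bytes = 28794611
  secs  = 2.519839059
  printf "%.1f MiB/s\n", bytes / secs / (1024 * 1024)
}'
# -> 10.9 MiB/s
```

Note the two byte counts in the log differ slightly (28797811 read vs. 28794611 image size); the calculation above uses the reported image size, so the figure is approximate either way.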
May 27 18:16:36.378638 containerd[1559]: time="2025-05-27T18:16:36.378544786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:36.381086 containerd[1559]: time="2025-05-27T18:16:36.381034267Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 27 18:16:36.381666 containerd[1559]: time="2025-05-27T18:16:36.381623467Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:36.384430 containerd[1559]: time="2025-05-27T18:16:36.384390649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:36.385105 containerd[1559]: time="2025-05-27T18:16:36.385065989Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.143056971s"
May 27 18:16:36.385141 containerd[1559]: time="2025-05-27T18:16:36.385107789Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 18:16:36.386444 containerd[1559]: time="2025-05-27T18:16:36.386404140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 18:16:38.088320 containerd[1559]: time="2025-05-27T18:16:38.087536660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:38.089044 containerd[1559]: time="2025-05-27T18:16:38.088643810Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 27 18:16:38.089117 containerd[1559]: time="2025-05-27T18:16:38.089096200Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:38.091531 containerd[1559]: time="2025-05-27T18:16:38.091494782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 18:16:38.092465 containerd[1559]: time="2025-05-27T18:16:38.092253982Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.705822792s"
May 27 18:16:38.092465 containerd[1559]: time="2025-05-27T18:16:38.092279262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 27 18:16:38.092885 containerd[1559]: time="2025-05-27T18:16:38.092864872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 18:16:39.585793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1512984522.mount: Deactivated successfully.
May 27 18:16:39.920557 containerd[1559]: time="2025-05-27T18:16:39.920392115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:39.921961 containerd[1559]: time="2025-05-27T18:16:39.921857746Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 27 18:16:39.922769 containerd[1559]: time="2025-05-27T18:16:39.922730887Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:39.924674 containerd[1559]: time="2025-05-27T18:16:39.924256407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:39.924838 containerd[1559]: time="2025-05-27T18:16:39.924802018Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.831912976s" May 27 18:16:39.925002 containerd[1559]: time="2025-05-27T18:16:39.924941188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 18:16:39.925544 containerd[1559]: time="2025-05-27T18:16:39.925501458Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 18:16:40.658112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896611596.mount: Deactivated successfully. 
May 27 18:16:41.307221 containerd[1559]: time="2025-05-27T18:16:41.307163038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:41.308467 containerd[1559]: time="2025-05-27T18:16:41.308242169Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 18:16:41.309256 containerd[1559]: time="2025-05-27T18:16:41.309216919Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:41.311523 containerd[1559]: time="2025-05-27T18:16:41.311500171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:41.312381 containerd[1559]: time="2025-05-27T18:16:41.312360961Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.386825183s" May 27 18:16:41.312470 containerd[1559]: time="2025-05-27T18:16:41.312454931Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 18:16:41.313576 containerd[1559]: time="2025-05-27T18:16:41.313510622Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 18:16:41.929759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981100507.mount: Deactivated successfully. 
May 27 18:16:41.934378 containerd[1559]: time="2025-05-27T18:16:41.934341532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:16:41.935260 containerd[1559]: time="2025-05-27T18:16:41.935237602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 18:16:41.936233 containerd[1559]: time="2025-05-27T18:16:41.935814992Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:16:41.937790 containerd[1559]: time="2025-05-27T18:16:41.937763103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 18:16:41.938626 containerd[1559]: time="2025-05-27T18:16:41.938602524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 624.947232ms" May 27 18:16:41.938710 containerd[1559]: time="2025-05-27T18:16:41.938694294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 18:16:41.939382 containerd[1559]: time="2025-05-27T18:16:41.939361694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 18:16:42.809003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660675450.mount: Deactivated 
successfully. May 27 18:16:44.514617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 18:16:44.517351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:16:44.689067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:16:44.702357 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 18:16:44.739420 kubelet[2218]: E0527 18:16:44.739347 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 18:16:44.742252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 18:16:44.742412 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 18:16:44.742819 systemd[1]: kubelet.service: Consumed 170ms CPU time, 110.6M memory peak. 
May 27 18:16:44.874107 containerd[1559]: time="2025-05-27T18:16:44.873197620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:44.874107 containerd[1559]: time="2025-05-27T18:16:44.873994101Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 27 18:16:44.874856 containerd[1559]: time="2025-05-27T18:16:44.874795621Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:44.877419 containerd[1559]: time="2025-05-27T18:16:44.877384662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:16:44.879058 containerd[1559]: time="2025-05-27T18:16:44.878427363Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.939040189s" May 27 18:16:44.879058 containerd[1559]: time="2025-05-27T18:16:44.878457693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 18:16:47.001429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:16:47.002010 systemd[1]: kubelet.service: Consumed 170ms CPU time, 110.6M memory peak. May 27 18:16:47.004119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:16:47.031993 systemd[1]: Reload requested from client PID 2255 ('systemctl') (unit session-7.scope)... 
May 27 18:16:47.032083 systemd[1]: Reloading... May 27 18:16:47.247103 zram_generator::config[2302]: No configuration found. May 27 18:16:47.343795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:16:47.444705 systemd[1]: Reloading finished in 412 ms. May 27 18:16:47.501589 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 18:16:47.501692 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 18:16:47.502044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:16:47.502115 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.3M memory peak. May 27 18:16:47.504234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:16:47.700929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:16:47.708452 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:16:47.747970 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:16:47.748294 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 18:16:47.748344 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 18:16:47.748470 kubelet[2353]: I0527 18:16:47.748441 2353 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:16:47.935431 kubelet[2353]: I0527 18:16:47.935377 2353 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 18:16:47.935431 kubelet[2353]: I0527 18:16:47.935412 2353 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:16:47.935730 kubelet[2353]: I0527 18:16:47.935706 2353 server.go:954] "Client rotation is on, will bootstrap in background" May 27 18:16:47.968442 kubelet[2353]: E0527 18:16:47.968347 2353 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.129.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.129.174:6443: connect: connection refused" logger="UnhandledError" May 27 18:16:47.970291 kubelet[2353]: I0527 18:16:47.970173 2353 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:16:47.980001 kubelet[2353]: I0527 18:16:47.979965 2353 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 18:16:47.983597 kubelet[2353]: I0527 18:16:47.983582 2353 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 18:16:47.985054 kubelet[2353]: I0527 18:16:47.985004 2353 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:16:47.985199 kubelet[2353]: I0527 18:16:47.985034 2353 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-129-174","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 18:16:47.985291 kubelet[2353]: I0527 18:16:47.985205 2353 topology_manager.go:138] "Creating topology manager with none 
policy" May 27 18:16:47.985291 kubelet[2353]: I0527 18:16:47.985214 2353 container_manager_linux.go:304] "Creating device plugin manager" May 27 18:16:47.985343 kubelet[2353]: I0527 18:16:47.985330 2353 state_mem.go:36] "Initialized new in-memory state store" May 27 18:16:47.989063 kubelet[2353]: I0527 18:16:47.988957 2353 kubelet.go:446] "Attempting to sync node with API server" May 27 18:16:47.989063 kubelet[2353]: I0527 18:16:47.989009 2353 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:16:47.989063 kubelet[2353]: I0527 18:16:47.989032 2353 kubelet.go:352] "Adding apiserver pod source" May 27 18:16:47.989063 kubelet[2353]: I0527 18:16:47.989042 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:16:47.994555 kubelet[2353]: W0527 18:16:47.993881 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.129.174:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-129-174&limit=500&resourceVersion=0": dial tcp 172.237.129.174:6443: connect: connection refused May 27 18:16:47.994555 kubelet[2353]: E0527 18:16:47.993946 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.129.174:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-129-174&limit=500&resourceVersion=0\": dial tcp 172.237.129.174:6443: connect: connection refused" logger="UnhandledError" May 27 18:16:47.994555 kubelet[2353]: W0527 18:16:47.994251 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.129.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.129.174:6443: connect: connection refused May 27 18:16:47.994555 kubelet[2353]: E0527 18:16:47.994498 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.129.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.129.174:6443: connect: connection refused" logger="UnhandledError" May 27 18:16:47.994851 kubelet[2353]: I0527 18:16:47.994816 2353 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:16:47.995696 kubelet[2353]: I0527 18:16:47.995143 2353 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 18:16:47.995696 kubelet[2353]: W0527 18:16:47.995205 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 18:16:47.999176 kubelet[2353]: I0527 18:16:47.999096 2353 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 18:16:47.999176 kubelet[2353]: I0527 18:16:47.999148 2353 server.go:1287] "Started kubelet" May 27 18:16:48.001386 kubelet[2353]: I0527 18:16:48.001370 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:16:48.004936 kubelet[2353]: I0527 18:16:48.004900 2353 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:16:48.005340 kubelet[2353]: I0527 18:16:48.005224 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:16:48.005641 kubelet[2353]: I0527 18:16:48.005611 2353 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:16:48.005962 kubelet[2353]: I0527 18:16:48.005949 2353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:16:48.008104 kubelet[2353]: I0527 18:16:48.008093 2353 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 18:16:48.008432 kubelet[2353]: 
E0527 18:16:48.008395 2353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-129-174\" not found" May 27 18:16:48.009084 kubelet[2353]: I0527 18:16:48.009014 2353 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 18:16:48.009160 kubelet[2353]: I0527 18:16:48.009058 2353 reconciler.go:26] "Reconciler: start to sync state" May 27 18:16:48.010922 kubelet[2353]: E0527 18:16:48.010843 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.129.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-129-174?timeout=10s\": dial tcp 172.237.129.174:6443: connect: connection refused" interval="200ms" May 27 18:16:48.013326 kubelet[2353]: I0527 18:16:48.013311 2353 server.go:479] "Adding debug handlers to kubelet server" May 27 18:16:48.014883 kubelet[2353]: I0527 18:16:48.014578 2353 factory.go:221] Registration of the systemd container factory successfully May 27 18:16:48.014883 kubelet[2353]: E0527 18:16:48.011172 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.129.174:6443/api/v1/namespaces/default/events\": dial tcp 172.237.129.174:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-129-174.18437513bdce1842 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-129-174,UID:172-237-129-174,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-129-174,},FirstTimestamp:2025-05-27 18:16:47.999113282 +0000 UTC m=+0.286576824,LastTimestamp:2025-05-27 18:16:47.999113282 +0000 UTC m=+0.286576824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-129-174,}" May 27 18:16:48.014883 kubelet[2353]: I0527 18:16:48.014653 2353 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:16:48.014883 kubelet[2353]: W0527 18:16:48.014678 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.129.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.129.174:6443: connect: connection refused May 27 18:16:48.014883 kubelet[2353]: E0527 18:16:48.014733 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.129.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.129.174:6443: connect: connection refused" logger="UnhandledError" May 27 18:16:48.020146 kubelet[2353]: E0527 18:16:48.020128 2353 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 18:16:48.022183 kubelet[2353]: I0527 18:16:48.020359 2353 factory.go:221] Registration of the containerd container factory successfully May 27 18:16:48.033484 kubelet[2353]: I0527 18:16:48.033375 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 18:16:48.034658 kubelet[2353]: I0527 18:16:48.034643 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 18:16:48.034726 kubelet[2353]: I0527 18:16:48.034717 2353 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 18:16:48.034796 kubelet[2353]: I0527 18:16:48.034785 2353 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 18:16:48.034839 kubelet[2353]: I0527 18:16:48.034832 2353 kubelet.go:2382] "Starting kubelet main sync loop" May 27 18:16:48.034931 kubelet[2353]: E0527 18:16:48.034917 2353 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 18:16:48.043450 kubelet[2353]: W0527 18:16:48.043378 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.129.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.129.174:6443: connect: connection refused May 27 18:16:48.043557 kubelet[2353]: E0527 18:16:48.043541 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.129.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.129.174:6443: connect: connection refused" logger="UnhandledError" May 27 18:16:48.048523 kubelet[2353]: I0527 18:16:48.048505 2353 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 18:16:48.048523 kubelet[2353]: I0527 18:16:48.048519 2353 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 18:16:48.048603 kubelet[2353]: I0527 18:16:48.048533 2353 state_mem.go:36] "Initialized new in-memory state store" May 27 18:16:48.050810 kubelet[2353]: I0527 18:16:48.050792 2353 policy_none.go:49] "None policy: Start" May 27 18:16:48.050810 kubelet[2353]: I0527 18:16:48.050812 2353 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 18:16:48.050810 kubelet[2353]: I0527 18:16:48.050823 2353 state_mem.go:35] "Initializing new in-memory state store" May 27 18:16:48.056349 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 27 18:16:48.072587 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 18:16:48.076434 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 18:16:48.087799 kubelet[2353]: I0527 18:16:48.087756 2353 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 18:16:48.087966 kubelet[2353]: I0527 18:16:48.087937 2353 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:16:48.088026 kubelet[2353]: I0527 18:16:48.087957 2353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:16:48.088552 kubelet[2353]: I0527 18:16:48.088469 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:16:48.089701 kubelet[2353]: E0527 18:16:48.089444 2353 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 18:16:48.090081 kubelet[2353]: E0527 18:16:48.090053 2353 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-129-174\" not found" May 27 18:16:48.145401 systemd[1]: Created slice kubepods-burstable-pod4905c8870ccf4c8d7957a8ee2ad16fff.slice - libcontainer container kubepods-burstable-pod4905c8870ccf4c8d7957a8ee2ad16fff.slice. May 27 18:16:48.163464 kubelet[2353]: E0527 18:16:48.163189 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:48.166204 systemd[1]: Created slice kubepods-burstable-poddc4320da06d65f75ee8363be7423aa69.slice - libcontainer container kubepods-burstable-poddc4320da06d65f75ee8363be7423aa69.slice. 
May 27 18:16:48.177259 kubelet[2353]: E0527 18:16:48.177192 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:48.179823 systemd[1]: Created slice kubepods-burstable-pod658b806bc4c78043f7741bcf958cb082.slice - libcontainer container kubepods-burstable-pod658b806bc4c78043f7741bcf958cb082.slice. May 27 18:16:48.181766 kubelet[2353]: E0527 18:16:48.181727 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:48.189896 kubelet[2353]: I0527 18:16:48.189873 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-237-129-174" May 27 18:16:48.190237 kubelet[2353]: E0527 18:16:48.190214 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.129.174:6443/api/v1/nodes\": dial tcp 172.237.129.174:6443: connect: connection refused" node="172-237-129-174" May 27 18:16:48.210490 kubelet[2353]: I0527 18:16:48.210462 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-ca-certs\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:48.210647 kubelet[2353]: I0527 18:16:48.210492 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-flexvolume-dir\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:48.210647 kubelet[2353]: I0527 18:16:48.210511 2353 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-kubeconfig\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:48.210647 kubelet[2353]: I0527 18:16:48.210525 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc4320da06d65f75ee8363be7423aa69-k8s-certs\") pod \"kube-apiserver-172-237-129-174\" (UID: \"dc4320da06d65f75ee8363be7423aa69\") " pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:48.210647 kubelet[2353]: I0527 18:16:48.210541 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc4320da06d65f75ee8363be7423aa69-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-129-174\" (UID: \"dc4320da06d65f75ee8363be7423aa69\") " pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:48.210647 kubelet[2353]: I0527 18:16:48.210557 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-k8s-certs\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:48.210756 kubelet[2353]: I0527 18:16:48.210574 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 
27 18:16:48.210756 kubelet[2353]: I0527 18:16:48.210591 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4905c8870ccf4c8d7957a8ee2ad16fff-kubeconfig\") pod \"kube-scheduler-172-237-129-174\" (UID: \"4905c8870ccf4c8d7957a8ee2ad16fff\") " pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:48.210756 kubelet[2353]: I0527 18:16:48.210606 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc4320da06d65f75ee8363be7423aa69-ca-certs\") pod \"kube-apiserver-172-237-129-174\" (UID: \"dc4320da06d65f75ee8363be7423aa69\") " pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:48.211815 kubelet[2353]: E0527 18:16:48.211765 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.129.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-129-174?timeout=10s\": dial tcp 172.237.129.174:6443: connect: connection refused" interval="400ms" May 27 18:16:48.393486 kubelet[2353]: I0527 18:16:48.393446 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-237-129-174" May 27 18:16:48.393894 kubelet[2353]: E0527 18:16:48.393858 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.129.174:6443/api/v1/nodes\": dial tcp 172.237.129.174:6443: connect: connection refused" node="172-237-129-174" May 27 18:16:48.463767 kubelet[2353]: E0527 18:16:48.463718 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:48.464752 containerd[1559]: time="2025-05-27T18:16:48.464688615Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-237-129-174,Uid:4905c8870ccf4c8d7957a8ee2ad16fff,Namespace:kube-system,Attempt:0,}" May 27 18:16:48.478117 kubelet[2353]: E0527 18:16:48.478042 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:48.479002 containerd[1559]: time="2025-05-27T18:16:48.478488392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-129-174,Uid:dc4320da06d65f75ee8363be7423aa69,Namespace:kube-system,Attempt:0,}" May 27 18:16:48.482128 kubelet[2353]: E0527 18:16:48.482096 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:48.483323 containerd[1559]: time="2025-05-27T18:16:48.483293164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-129-174,Uid:658b806bc4c78043f7741bcf958cb082,Namespace:kube-system,Attempt:0,}" May 27 18:16:48.488433 containerd[1559]: time="2025-05-27T18:16:48.488412187Z" level=info msg="connecting to shim 9d48a8ef96daada1dab5685c4ddfb1ad02ef1d5654ec09001c56b918be886a60" address="unix:///run/containerd/s/49fb229a66cea1c095c71489d8823ba6addc9ad1293a0e0b75ccd48ae36b7cee" namespace=k8s.io protocol=ttrpc version=3 May 27 18:16:48.515514 containerd[1559]: time="2025-05-27T18:16:48.515485990Z" level=info msg="connecting to shim 52aeeb4dad60de0d23bfc24eadc608ea99e85f13575ce7a53b4db314a3d6dd08" address="unix:///run/containerd/s/fc4d852f6d49281762a8bf063ca842a2fd809f12591c9cb849c608c99c7543f9" namespace=k8s.io protocol=ttrpc version=3 May 27 18:16:48.524352 containerd[1559]: time="2025-05-27T18:16:48.524136354Z" level=info msg="connecting to shim 321ca40fb76e1e32f76bd132d9a3c934bfb53bdef30bde855c8e2ff7da5a1037" 
address="unix:///run/containerd/s/cd5172559183b873365bf20e39400400c6d605ed93117e2a2d6a71f65bbe2550" namespace=k8s.io protocol=ttrpc version=3 May 27 18:16:48.527475 systemd[1]: Started cri-containerd-9d48a8ef96daada1dab5685c4ddfb1ad02ef1d5654ec09001c56b918be886a60.scope - libcontainer container 9d48a8ef96daada1dab5685c4ddfb1ad02ef1d5654ec09001c56b918be886a60. May 27 18:16:48.558244 systemd[1]: Started cri-containerd-52aeeb4dad60de0d23bfc24eadc608ea99e85f13575ce7a53b4db314a3d6dd08.scope - libcontainer container 52aeeb4dad60de0d23bfc24eadc608ea99e85f13575ce7a53b4db314a3d6dd08. May 27 18:16:48.565840 systemd[1]: Started cri-containerd-321ca40fb76e1e32f76bd132d9a3c934bfb53bdef30bde855c8e2ff7da5a1037.scope - libcontainer container 321ca40fb76e1e32f76bd132d9a3c934bfb53bdef30bde855c8e2ff7da5a1037. May 27 18:16:48.609702 containerd[1559]: time="2025-05-27T18:16:48.609657497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-129-174,Uid:4905c8870ccf4c8d7957a8ee2ad16fff,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d48a8ef96daada1dab5685c4ddfb1ad02ef1d5654ec09001c56b918be886a60\"" May 27 18:16:48.612910 kubelet[2353]: E0527 18:16:48.612441 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:48.613879 kubelet[2353]: E0527 18:16:48.613858 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.129.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-129-174?timeout=10s\": dial tcp 172.237.129.174:6443: connect: connection refused" interval="800ms" May 27 18:16:48.615842 containerd[1559]: time="2025-05-27T18:16:48.615789700Z" level=info msg="CreateContainer within sandbox \"9d48a8ef96daada1dab5685c4ddfb1ad02ef1d5654ec09001c56b918be886a60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 
18:16:48.629193 containerd[1559]: time="2025-05-27T18:16:48.629140267Z" level=info msg="Container adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6: CDI devices from CRI Config.CDIDevices: []" May 27 18:16:48.635308 containerd[1559]: time="2025-05-27T18:16:48.635285650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-129-174,Uid:dc4320da06d65f75ee8363be7423aa69,Namespace:kube-system,Attempt:0,} returns sandbox id \"52aeeb4dad60de0d23bfc24eadc608ea99e85f13575ce7a53b4db314a3d6dd08\"" May 27 18:16:48.636994 kubelet[2353]: E0527 18:16:48.636897 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:48.639137 containerd[1559]: time="2025-05-27T18:16:48.639116742Z" level=info msg="CreateContainer within sandbox \"9d48a8ef96daada1dab5685c4ddfb1ad02ef1d5654ec09001c56b918be886a60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6\"" May 27 18:16:48.640588 containerd[1559]: time="2025-05-27T18:16:48.639809532Z" level=info msg="CreateContainer within sandbox \"52aeeb4dad60de0d23bfc24eadc608ea99e85f13575ce7a53b4db314a3d6dd08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 18:16:48.644846 containerd[1559]: time="2025-05-27T18:16:48.644464815Z" level=info msg="Container 4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013: CDI devices from CRI Config.CDIDevices: []" May 27 18:16:48.658864 containerd[1559]: time="2025-05-27T18:16:48.658818642Z" level=info msg="StartContainer for \"adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6\"" May 27 18:16:48.660386 containerd[1559]: time="2025-05-27T18:16:48.660351082Z" level=info msg="connecting to shim adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6" 
address="unix:///run/containerd/s/49fb229a66cea1c095c71489d8823ba6addc9ad1293a0e0b75ccd48ae36b7cee" protocol=ttrpc version=3 May 27 18:16:48.663554 containerd[1559]: time="2025-05-27T18:16:48.663532484Z" level=info msg="CreateContainer within sandbox \"52aeeb4dad60de0d23bfc24eadc608ea99e85f13575ce7a53b4db314a3d6dd08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013\"" May 27 18:16:48.663713 containerd[1559]: time="2025-05-27T18:16:48.663629024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-129-174,Uid:658b806bc4c78043f7741bcf958cb082,Namespace:kube-system,Attempt:0,} returns sandbox id \"321ca40fb76e1e32f76bd132d9a3c934bfb53bdef30bde855c8e2ff7da5a1037\"" May 27 18:16:48.664109 containerd[1559]: time="2025-05-27T18:16:48.664094764Z" level=info msg="StartContainer for \"4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013\"" May 27 18:16:48.664503 kubelet[2353]: E0527 18:16:48.664489 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:48.665380 containerd[1559]: time="2025-05-27T18:16:48.665306315Z" level=info msg="connecting to shim 4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013" address="unix:///run/containerd/s/fc4d852f6d49281762a8bf063ca842a2fd809f12591c9cb849c608c99c7543f9" protocol=ttrpc version=3 May 27 18:16:48.666883 containerd[1559]: time="2025-05-27T18:16:48.666669336Z" level=info msg="CreateContainer within sandbox \"321ca40fb76e1e32f76bd132d9a3c934bfb53bdef30bde855c8e2ff7da5a1037\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 18:16:48.673050 containerd[1559]: time="2025-05-27T18:16:48.673034489Z" level=info msg="Container b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728: CDI devices 
from CRI Config.CDIDevices: []" May 27 18:16:48.680367 containerd[1559]: time="2025-05-27T18:16:48.680348582Z" level=info msg="CreateContainer within sandbox \"321ca40fb76e1e32f76bd132d9a3c934bfb53bdef30bde855c8e2ff7da5a1037\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728\"" May 27 18:16:48.680885 containerd[1559]: time="2025-05-27T18:16:48.680865623Z" level=info msg="StartContainer for \"b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728\"" May 27 18:16:48.682286 containerd[1559]: time="2025-05-27T18:16:48.682270543Z" level=info msg="connecting to shim b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728" address="unix:///run/containerd/s/cd5172559183b873365bf20e39400400c6d605ed93117e2a2d6a71f65bbe2550" protocol=ttrpc version=3 May 27 18:16:48.683108 systemd[1]: Started cri-containerd-adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6.scope - libcontainer container adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6. May 27 18:16:48.694255 systemd[1]: Started cri-containerd-4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013.scope - libcontainer container 4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013. May 27 18:16:48.709227 systemd[1]: Started cri-containerd-b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728.scope - libcontainer container b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728. 
May 27 18:16:48.769201 containerd[1559]: time="2025-05-27T18:16:48.769146047Z" level=info msg="StartContainer for \"b5db6d4a1002dd572aa0f9840c77a27a84c9154716500197eac610d36130c728\" returns successfully" May 27 18:16:48.784725 containerd[1559]: time="2025-05-27T18:16:48.780967873Z" level=info msg="StartContainer for \"4f6fd8148a4fa8c8d2dead2cdf9142ad9ba184e5dd524b5f194827abbc351013\" returns successfully" May 27 18:16:48.800064 kubelet[2353]: I0527 18:16:48.799996 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-237-129-174" May 27 18:16:48.801192 kubelet[2353]: E0527 18:16:48.801170 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.129.174:6443/api/v1/nodes\": dial tcp 172.237.129.174:6443: connect: connection refused" node="172-237-129-174" May 27 18:16:48.811074 containerd[1559]: time="2025-05-27T18:16:48.811047088Z" level=info msg="StartContainer for \"adf12e480b988d869cbd38ca23877338a93399c0814d07e5068993693f77b5b6\" returns successfully" May 27 18:16:49.058370 kubelet[2353]: E0527 18:16:49.058321 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:49.060936 kubelet[2353]: E0527 18:16:49.060841 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:49.061030 kubelet[2353]: E0527 18:16:49.060952 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:49.061152 kubelet[2353]: E0527 18:16:49.061118 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 
18:16:49.067828 kubelet[2353]: E0527 18:16:49.067800 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:49.068000 kubelet[2353]: E0527 18:16:49.067892 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:49.603804 kubelet[2353]: I0527 18:16:49.603758 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-237-129-174" May 27 18:16:49.888467 kubelet[2353]: E0527 18:16:49.888313 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-129-174\" not found" node="172-237-129-174" May 27 18:16:49.994906 kubelet[2353]: I0527 18:16:49.994843 2353 apiserver.go:52] "Watching apiserver" May 27 18:16:50.009170 kubelet[2353]: I0527 18:16:50.009122 2353 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 18:16:50.044649 kubelet[2353]: I0527 18:16:50.044606 2353 kubelet_node_status.go:78] "Successfully registered node" node="172-237-129-174" May 27 18:16:50.067426 kubelet[2353]: I0527 18:16:50.067381 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:50.067706 kubelet[2353]: I0527 18:16:50.067666 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:50.073727 kubelet[2353]: E0527 18:16:50.073693 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-129-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:50.073897 kubelet[2353]: E0527 18:16:50.073841 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:50.074443 kubelet[2353]: E0527 18:16:50.074268 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-129-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:50.074443 kubelet[2353]: E0527 18:16:50.074348 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:50.108937 kubelet[2353]: I0527 18:16:50.108894 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:50.110626 kubelet[2353]: E0527 18:16:50.110557 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-129-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:50.110626 kubelet[2353]: I0527 18:16:50.110596 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:50.112266 kubelet[2353]: E0527 18:16:50.112033 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-129-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:50.112266 kubelet[2353]: I0527 18:16:50.112250 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:50.113619 kubelet[2353]: E0527 18:16:50.113586 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-129-174\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:51.072510 kubelet[2353]: I0527 18:16:51.072325 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:51.078456 kubelet[2353]: E0527 18:16:51.078380 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:51.562217 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 27 18:16:51.660530 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... May 27 18:16:51.660551 systemd[1]: Reloading... May 27 18:16:51.808052 zram_generator::config[2682]: No configuration found. May 27 18:16:51.877175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 18:16:52.007349 systemd[1]: Reloading finished in 346 ms. May 27 18:16:52.032951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:16:52.052056 systemd[1]: kubelet.service: Deactivated successfully. May 27 18:16:52.052441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 18:16:52.052503 systemd[1]: kubelet.service: Consumed 709ms CPU time, 131.5M memory peak. May 27 18:16:52.054526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 18:16:52.241311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 18:16:52.252065 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 18:16:52.294664 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:16:52.294664 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 18:16:52.294664 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 18:16:52.295141 kubelet[2718]: I0527 18:16:52.294700 2718 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 18:16:52.302876 kubelet[2718]: I0527 18:16:52.302831 2718 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 18:16:52.302876 kubelet[2718]: I0527 18:16:52.302856 2718 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 18:16:52.303129 kubelet[2718]: I0527 18:16:52.303096 2718 server.go:954] "Client rotation is on, will bootstrap in background" May 27 18:16:52.304108 kubelet[2718]: I0527 18:16:52.304084 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 27 18:16:52.306563 kubelet[2718]: I0527 18:16:52.306260 2718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 18:16:52.311752 kubelet[2718]: I0527 18:16:52.311724 2718 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 18:16:52.317350 kubelet[2718]: I0527 18:16:52.317280 2718 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 18:16:52.317765 kubelet[2718]: I0527 18:16:52.317733 2718 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 18:16:52.317908 kubelet[2718]: I0527 18:16:52.317762 2718 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-129-174","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CP
UManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 18:16:52.318026 kubelet[2718]: I0527 18:16:52.317915 2718 topology_manager.go:138] "Creating topology manager with none policy" May 27 18:16:52.318026 kubelet[2718]: I0527 18:16:52.317925 2718 container_manager_linux.go:304] "Creating device plugin manager" May 27 18:16:52.318026 kubelet[2718]: I0527 18:16:52.317969 2718 state_mem.go:36] "Initialized new in-memory state store" May 27 18:16:52.319012 kubelet[2718]: I0527 18:16:52.318140 2718 kubelet.go:446] "Attempting to sync node with API server" May 27 18:16:52.319012 kubelet[2718]: I0527 18:16:52.318185 2718 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 18:16:52.319012 kubelet[2718]: I0527 18:16:52.318208 2718 kubelet.go:352] "Adding apiserver pod source" May 27 18:16:52.319012 kubelet[2718]: I0527 18:16:52.318217 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 18:16:52.323042 kubelet[2718]: I0527 18:16:52.323010 2718 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 18:16:52.323428 kubelet[2718]: I0527 18:16:52.323401 2718 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 18:16:52.324804 kubelet[2718]: I0527 18:16:52.324689 2718 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 18:16:52.324804 kubelet[2718]: I0527 18:16:52.324720 2718 server.go:1287] "Started kubelet" May 27 18:16:52.328951 kubelet[2718]: I0527 18:16:52.328922 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 18:16:52.332993 kubelet[2718]: I0527 18:16:52.331119 
2718 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 18:16:52.332993 kubelet[2718]: I0527 18:16:52.332053 2718 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 18:16:52.332993 kubelet[2718]: E0527 18:16:52.332181 2718 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-129-174\" not found" May 27 18:16:52.333105 kubelet[2718]: I0527 18:16:52.333059 2718 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 18:16:52.333185 kubelet[2718]: I0527 18:16:52.333157 2718 reconciler.go:26] "Reconciler: start to sync state" May 27 18:16:52.338729 kubelet[2718]: I0527 18:16:52.338681 2718 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 18:16:52.339602 kubelet[2718]: I0527 18:16:52.339568 2718 server.go:479] "Adding debug handlers to kubelet server" May 27 18:16:52.341769 kubelet[2718]: I0527 18:16:52.341749 2718 factory.go:221] Registration of the systemd container factory successfully May 27 18:16:52.342176 kubelet[2718]: I0527 18:16:52.342152 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 18:16:52.343323 kubelet[2718]: I0527 18:16:52.342848 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 18:16:52.345003 kubelet[2718]: I0527 18:16:52.343659 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 18:16:52.345003 kubelet[2718]: I0527 18:16:52.344560 2718 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 18:16:52.345003 kubelet[2718]: I0527 18:16:52.344577 2718 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 18:16:52.345003 kubelet[2718]: I0527 18:16:52.344589 2718 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 18:16:52.345003 kubelet[2718]: I0527 18:16:52.344594 2718 kubelet.go:2382] "Starting kubelet main sync loop" May 27 18:16:52.345003 kubelet[2718]: E0527 18:16:52.344632 2718 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 18:16:52.345224 kubelet[2718]: I0527 18:16:52.345207 2718 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 18:16:52.354997 kubelet[2718]: I0527 18:16:52.354556 2718 factory.go:221] Registration of the containerd container factory successfully May 27 18:16:52.381442 kubelet[2718]: E0527 18:16:52.381393 2718 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 18:16:52.420220 kubelet[2718]: I0527 18:16:52.420179 2718 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 18:16:52.420220 kubelet[2718]: I0527 18:16:52.420195 2718 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420396 2718 state_mem.go:36] "Initialized new in-memory state store" May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420551 2718 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420564 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420585 2718 policy_none.go:49] "None policy: Start" May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420597 2718 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420609 2718 state_mem.go:35] "Initializing new in-memory state store" May 27 18:16:52.420897 kubelet[2718]: I0527 18:16:52.420827 2718 state_mem.go:75] "Updated machine memory state" May 27 18:16:52.426317 kubelet[2718]: I0527 18:16:52.426290 2718 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 18:16:52.426488 kubelet[2718]: I0527 18:16:52.426458 2718 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 18:16:52.426539 kubelet[2718]: I0527 18:16:52.426479 2718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 18:16:52.427342 kubelet[2718]: I0527 18:16:52.427036 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 18:16:52.431122 kubelet[2718]: E0527 18:16:52.431102 2718 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 18:16:52.445803 kubelet[2718]: I0527 18:16:52.445759 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:52.446334 kubelet[2718]: I0527 18:16:52.446174 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:52.446449 kubelet[2718]: I0527 18:16:52.446419 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:52.457308 kubelet[2718]: E0527 18:16:52.457266 2718 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-129-174\" already exists" pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:52.530597 kubelet[2718]: I0527 18:16:52.530378 2718 kubelet_node_status.go:75] "Attempting to register node" node="172-237-129-174" May 27 18:16:52.538603 kubelet[2718]: I0527 18:16:52.538555 2718 kubelet_node_status.go:124] "Node was previously registered" node="172-237-129-174" May 27 18:16:52.538711 kubelet[2718]: I0527 18:16:52.538621 2718 kubelet_node_status.go:78] "Successfully registered node" node="172-237-129-174" May 27 18:16:52.634085 kubelet[2718]: I0527 18:16:52.634044 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-k8s-certs\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:52.634085 kubelet[2718]: I0527 18:16:52.634115 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-kubeconfig\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " 
pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:52.634085 kubelet[2718]: I0527 18:16:52.634141 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:52.634646 kubelet[2718]: I0527 18:16:52.634164 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-ca-certs\") pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:52.634646 kubelet[2718]: I0527 18:16:52.634186 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc4320da06d65f75ee8363be7423aa69-k8s-certs\") pod \"kube-apiserver-172-237-129-174\" (UID: \"dc4320da06d65f75ee8363be7423aa69\") " pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:52.634646 kubelet[2718]: I0527 18:16:52.634204 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc4320da06d65f75ee8363be7423aa69-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-129-174\" (UID: \"dc4320da06d65f75ee8363be7423aa69\") " pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:52.634646 kubelet[2718]: I0527 18:16:52.634223 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/658b806bc4c78043f7741bcf958cb082-flexvolume-dir\") 
pod \"kube-controller-manager-172-237-129-174\" (UID: \"658b806bc4c78043f7741bcf958cb082\") " pod="kube-system/kube-controller-manager-172-237-129-174" May 27 18:16:52.634646 kubelet[2718]: I0527 18:16:52.634242 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4905c8870ccf4c8d7957a8ee2ad16fff-kubeconfig\") pod \"kube-scheduler-172-237-129-174\" (UID: \"4905c8870ccf4c8d7957a8ee2ad16fff\") " pod="kube-system/kube-scheduler-172-237-129-174" May 27 18:16:52.634767 kubelet[2718]: I0527 18:16:52.634258 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc4320da06d65f75ee8363be7423aa69-ca-certs\") pod \"kube-apiserver-172-237-129-174\" (UID: \"dc4320da06d65f75ee8363be7423aa69\") " pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:52.755475 kubelet[2718]: E0527 18:16:52.754664 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:52.756274 kubelet[2718]: E0527 18:16:52.755855 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:52.758416 kubelet[2718]: E0527 18:16:52.758348 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:53.320092 kubelet[2718]: I0527 18:16:53.320051 2718 apiserver.go:52] "Watching apiserver" May 27 18:16:53.333229 kubelet[2718]: I0527 18:16:53.333195 2718 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 18:16:53.397696 kubelet[2718]: I0527 
18:16:53.397663 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:53.398990 kubelet[2718]: E0527 18:16:53.398381 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:53.399674 kubelet[2718]: E0527 18:16:53.399653 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:53.408648 kubelet[2718]: E0527 18:16:53.408617 2718 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-129-174\" already exists" pod="kube-system/kube-apiserver-172-237-129-174" May 27 18:16:53.408740 kubelet[2718]: E0527 18:16:53.408717 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:53.430567 kubelet[2718]: I0527 18:16:53.430452 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-129-174" podStartSLOduration=1.430441406 podStartE2EDuration="1.430441406s" podCreationTimestamp="2025-05-27 18:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:16:53.423383992 +0000 UTC m=+1.165928063" watchObservedRunningTime="2025-05-27 18:16:53.430441406 +0000 UTC m=+1.172985477" May 27 18:16:53.438697 kubelet[2718]: I0527 18:16:53.438658 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-129-174" podStartSLOduration=2.43865089 podStartE2EDuration="2.43865089s" podCreationTimestamp="2025-05-27 18:16:51 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:16:53.432235247 +0000 UTC m=+1.174779318" watchObservedRunningTime="2025-05-27 18:16:53.43865089 +0000 UTC m=+1.181194971" May 27 18:16:53.452381 kubelet[2718]: I0527 18:16:53.452323 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-129-174" podStartSLOduration=1.452293347 podStartE2EDuration="1.452293347s" podCreationTimestamp="2025-05-27 18:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:16:53.43925988 +0000 UTC m=+1.181803961" watchObservedRunningTime="2025-05-27 18:16:53.452293347 +0000 UTC m=+1.194837418" May 27 18:16:54.399271 kubelet[2718]: E0527 18:16:54.399093 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:54.399271 kubelet[2718]: E0527 18:16:54.399175 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:55.400571 kubelet[2718]: E0527 18:16:55.400528 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:56.223649 kubelet[2718]: E0527 18:16:56.223542 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:58.236685 kubelet[2718]: I0527 18:16:58.236575 2718 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 18:16:58.241002 
containerd[1559]: time="2025-05-27T18:16:58.240264919Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 18:16:58.241457 kubelet[2718]: I0527 18:16:58.241443 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 18:16:59.194485 systemd[1]: Created slice kubepods-besteffort-pod43cd6f01_c73e_4a5f_80b2_c66e2e294e0d.slice - libcontainer container kubepods-besteffort-pod43cd6f01_c73e_4a5f_80b2_c66e2e294e0d.slice. May 27 18:16:59.279113 kubelet[2718]: I0527 18:16:59.279091 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43cd6f01-c73e-4a5f-80b2-c66e2e294e0d-lib-modules\") pod \"kube-proxy-lb6pg\" (UID: \"43cd6f01-c73e-4a5f-80b2-c66e2e294e0d\") " pod="kube-system/kube-proxy-lb6pg" May 27 18:16:59.279766 kubelet[2718]: I0527 18:16:59.279742 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvl7\" (UniqueName: \"kubernetes.io/projected/43cd6f01-c73e-4a5f-80b2-c66e2e294e0d-kube-api-access-lqvl7\") pod \"kube-proxy-lb6pg\" (UID: \"43cd6f01-c73e-4a5f-80b2-c66e2e294e0d\") " pod="kube-system/kube-proxy-lb6pg" May 27 18:16:59.280059 kubelet[2718]: I0527 18:16:59.280033 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43cd6f01-c73e-4a5f-80b2-c66e2e294e0d-kube-proxy\") pod \"kube-proxy-lb6pg\" (UID: \"43cd6f01-c73e-4a5f-80b2-c66e2e294e0d\") " pod="kube-system/kube-proxy-lb6pg" May 27 18:16:59.280168 kubelet[2718]: I0527 18:16:59.280157 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43cd6f01-c73e-4a5f-80b2-c66e2e294e0d-xtables-lock\") pod \"kube-proxy-lb6pg\" (UID: 
\"43cd6f01-c73e-4a5f-80b2-c66e2e294e0d\") " pod="kube-system/kube-proxy-lb6pg" May 27 18:16:59.316135 systemd[1]: Created slice kubepods-besteffort-pod0154f0cd_9f34_4749_9209_d663236ef364.slice - libcontainer container kubepods-besteffort-pod0154f0cd_9f34_4749_9209_d663236ef364.slice. May 27 18:16:59.359952 kubelet[2718]: E0527 18:16:59.359688 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:59.380701 kubelet[2718]: I0527 18:16:59.380678 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7p48\" (UniqueName: \"kubernetes.io/projected/0154f0cd-9f34-4749-9209-d663236ef364-kube-api-access-d7p48\") pod \"tigera-operator-844669ff44-9q5k7\" (UID: \"0154f0cd-9f34-4749-9209-d663236ef364\") " pod="tigera-operator/tigera-operator-844669ff44-9q5k7" May 27 18:16:59.381062 kubelet[2718]: I0527 18:16:59.381033 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0154f0cd-9f34-4749-9209-d663236ef364-var-lib-calico\") pod \"tigera-operator-844669ff44-9q5k7\" (UID: \"0154f0cd-9f34-4749-9209-d663236ef364\") " pod="tigera-operator/tigera-operator-844669ff44-9q5k7" May 27 18:16:59.409389 kubelet[2718]: E0527 18:16:59.406929 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:59.504217 kubelet[2718]: E0527 18:16:59.504051 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:59.504745 containerd[1559]: time="2025-05-27T18:16:59.504571331Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-lb6pg,Uid:43cd6f01-c73e-4a5f-80b2-c66e2e294e0d,Namespace:kube-system,Attempt:0,}" May 27 18:16:59.523903 containerd[1559]: time="2025-05-27T18:16:59.523628710Z" level=info msg="connecting to shim 5263f6eebe9140f0a0609d68ad5fc0dad394594195539aad40bcb379dc2cd317" address="unix:///run/containerd/s/e175a8d147ac65acbbd5a28c0c6aba0d9b3e641568ba0b72ff4954f99191856b" namespace=k8s.io protocol=ttrpc version=3 May 27 18:16:59.527600 kubelet[2718]: E0527 18:16:59.527580 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:59.560104 systemd[1]: Started cri-containerd-5263f6eebe9140f0a0609d68ad5fc0dad394594195539aad40bcb379dc2cd317.scope - libcontainer container 5263f6eebe9140f0a0609d68ad5fc0dad394594195539aad40bcb379dc2cd317. May 27 18:16:59.581216 containerd[1559]: time="2025-05-27T18:16:59.581176719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lb6pg,Uid:43cd6f01-c73e-4a5f-80b2-c66e2e294e0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5263f6eebe9140f0a0609d68ad5fc0dad394594195539aad40bcb379dc2cd317\"" May 27 18:16:59.582414 kubelet[2718]: E0527 18:16:59.582393 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:16:59.584943 containerd[1559]: time="2025-05-27T18:16:59.584903151Z" level=info msg="CreateContainer within sandbox \"5263f6eebe9140f0a0609d68ad5fc0dad394594195539aad40bcb379dc2cd317\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 18:16:59.597999 containerd[1559]: time="2025-05-27T18:16:59.597188307Z" level=info msg="Container 2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43: CDI devices from CRI Config.CDIDevices: []" May 27 18:16:59.603210 containerd[1559]: 
time="2025-05-27T18:16:59.603193210Z" level=info msg="CreateContainer within sandbox \"5263f6eebe9140f0a0609d68ad5fc0dad394594195539aad40bcb379dc2cd317\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43\"" May 27 18:16:59.604152 containerd[1559]: time="2025-05-27T18:16:59.604130151Z" level=info msg="StartContainer for \"2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43\"" May 27 18:16:59.605671 containerd[1559]: time="2025-05-27T18:16:59.605613401Z" level=info msg="connecting to shim 2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43" address="unix:///run/containerd/s/e175a8d147ac65acbbd5a28c0c6aba0d9b3e641568ba0b72ff4954f99191856b" protocol=ttrpc version=3 May 27 18:16:59.620902 containerd[1559]: time="2025-05-27T18:16:59.620853979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-9q5k7,Uid:0154f0cd-9f34-4749-9209-d663236ef364,Namespace:tigera-operator,Attempt:0,}" May 27 18:16:59.623150 systemd[1]: Started cri-containerd-2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43.scope - libcontainer container 2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43. May 27 18:16:59.647007 containerd[1559]: time="2025-05-27T18:16:59.644150731Z" level=info msg="connecting to shim 5dc08c28e5a85970fbd5904feb59632bef82c1d18ae512b5100fb1c081ffd683" address="unix:///run/containerd/s/94763adc9dea651bb1764befb0b684762180e149dcfb632a240d95df13e34373" namespace=k8s.io protocol=ttrpc version=3 May 27 18:16:59.670101 systemd[1]: Started cri-containerd-5dc08c28e5a85970fbd5904feb59632bef82c1d18ae512b5100fb1c081ffd683.scope - libcontainer container 5dc08c28e5a85970fbd5904feb59632bef82c1d18ae512b5100fb1c081ffd683. 
May 27 18:16:59.686434 containerd[1559]: time="2025-05-27T18:16:59.686263132Z" level=info msg="StartContainer for \"2f1b03c3a7dc377121d33cdb6cb704dbb1a1a8a752ea1c18de666f0ce16c9f43\" returns successfully" May 27 18:16:59.732889 containerd[1559]: time="2025-05-27T18:16:59.732838775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-9q5k7,Uid:0154f0cd-9f34-4749-9209-d663236ef364,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5dc08c28e5a85970fbd5904feb59632bef82c1d18ae512b5100fb1c081ffd683\"" May 27 18:16:59.736112 containerd[1559]: time="2025-05-27T18:16:59.736067976Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 18:17:00.392904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767527216.mount: Deactivated successfully. May 27 18:17:00.412336 kubelet[2718]: E0527 18:17:00.412304 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:00.413297 kubelet[2718]: E0527 18:17:00.413223 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:00.413665 kubelet[2718]: E0527 18:17:00.413648 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:00.749706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481590154.mount: Deactivated successfully. 
May 27 18:17:01.122522 containerd[1559]: time="2025-05-27T18:17:01.122436979Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:01.123738 containerd[1559]: time="2025-05-27T18:17:01.123543210Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 18:17:01.124618 containerd[1559]: time="2025-05-27T18:17:01.124579190Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:01.127666 containerd[1559]: time="2025-05-27T18:17:01.127630322Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:01.128263 containerd[1559]: time="2025-05-27T18:17:01.128183852Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.392065526s" May 27 18:17:01.128263 containerd[1559]: time="2025-05-27T18:17:01.128260592Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 18:17:01.132650 containerd[1559]: time="2025-05-27T18:17:01.132620884Z" level=info msg="CreateContainer within sandbox \"5dc08c28e5a85970fbd5904feb59632bef82c1d18ae512b5100fb1c081ffd683\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 18:17:01.141556 containerd[1559]: time="2025-05-27T18:17:01.141521749Z" level=info msg="Container 
a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:01.152293 containerd[1559]: time="2025-05-27T18:17:01.152235164Z" level=info msg="CreateContainer within sandbox \"5dc08c28e5a85970fbd5904feb59632bef82c1d18ae512b5100fb1c081ffd683\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9\"" May 27 18:17:01.153084 containerd[1559]: time="2025-05-27T18:17:01.153036954Z" level=info msg="StartContainer for \"a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9\"" May 27 18:17:01.154291 containerd[1559]: time="2025-05-27T18:17:01.154228085Z" level=info msg="connecting to shim a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9" address="unix:///run/containerd/s/94763adc9dea651bb1764befb0b684762180e149dcfb632a240d95df13e34373" protocol=ttrpc version=3 May 27 18:17:01.176104 systemd[1]: Started cri-containerd-a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9.scope - libcontainer container a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9. 
May 27 18:17:01.213725 containerd[1559]: time="2025-05-27T18:17:01.213677725Z" level=info msg="StartContainer for \"a124ab82a563ebf460b17214f56c79ed778f0a72167a83221de619c29b3a5aa9\" returns successfully" May 27 18:17:01.415306 kubelet[2718]: E0527 18:17:01.414711 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:01.423422 kubelet[2718]: I0527 18:17:01.423364 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lb6pg" podStartSLOduration=2.42334781 podStartE2EDuration="2.42334781s" podCreationTimestamp="2025-05-27 18:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:17:00.421617479 +0000 UTC m=+8.164161540" watchObservedRunningTime="2025-05-27 18:17:01.42334781 +0000 UTC m=+9.165891881" May 27 18:17:06.194041 update_engine[1540]: I20250527 18:17:06.193915 1540 update_attempter.cc:509] Updating boot flags... 
May 27 18:17:06.232839 kubelet[2718]: E0527 18:17:06.232401 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:06.249971 kubelet[2718]: I0527 18:17:06.249510 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-9q5k7" podStartSLOduration=5.853660423 podStartE2EDuration="7.249495141s" podCreationTimestamp="2025-05-27 18:16:59 +0000 UTC" firstStartedPulling="2025-05-27 18:16:59.733791225 +0000 UTC m=+7.476335296" lastFinishedPulling="2025-05-27 18:17:01.129625943 +0000 UTC m=+8.872170014" observedRunningTime="2025-05-27 18:17:01.4241539 +0000 UTC m=+9.166697971" watchObservedRunningTime="2025-05-27 18:17:06.249495141 +0000 UTC m=+13.992039222" May 27 18:17:06.654240 sudo[1794]: pam_unix(sudo:session): session closed for user root May 27 18:17:06.711158 sshd[1793]: Connection closed by 139.178.89.65 port 47248 May 27 18:17:06.712412 sshd-session[1791]: pam_unix(sshd:session): session closed for user core May 27 18:17:06.717697 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. May 27 18:17:06.718435 systemd[1]: sshd@6-172.237.129.174:22-139.178.89.65:47248.service: Deactivated successfully. May 27 18:17:06.722785 systemd[1]: session-7.scope: Deactivated successfully. May 27 18:17:06.723930 systemd[1]: session-7.scope: Consumed 3.774s CPU time, 228.3M memory peak. May 27 18:17:06.727838 systemd-logind[1534]: Removed session 7. May 27 18:17:09.714553 systemd[1]: Created slice kubepods-besteffort-podb6af5459_9338_4445_a947_616fa0fab20c.slice - libcontainer container kubepods-besteffort-podb6af5459_9338_4445_a947_616fa0fab20c.slice. 
May 27 18:17:09.721697 kubelet[2718]: W0527 18:17:09.721656 2718 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:172-237-129-174" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172-237-129-174' and this object May 27 18:17:09.722095 kubelet[2718]: E0527 18:17:09.721701 2718 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:172-237-129-174\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" logger="UnhandledError" May 27 18:17:09.722095 kubelet[2718]: W0527 18:17:09.721742 2718 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-237-129-174" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node '172-237-129-174' and this object May 27 18:17:09.722095 kubelet[2718]: E0527 18:17:09.721752 2718 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-237-129-174\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" logger="UnhandledError" May 27 18:17:09.722095 kubelet[2718]: W0527 18:17:09.721782 2718 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:172-237-129-174" cannot list resource "configmaps" in API group "" in the namespace 
"calico-system": no relationship found between node '172-237-129-174' and this object May 27 18:17:09.722095 kubelet[2718]: E0527 18:17:09.721790 2718 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:172-237-129-174\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" logger="UnhandledError" May 27 18:17:09.722233 kubelet[2718]: I0527 18:17:09.721828 2718 status_manager.go:890] "Failed to get status for pod" podUID="b6af5459-9338-4445-a947-616fa0fab20c" pod="calico-system/calico-typha-78c7fc97fb-fqpfg" err="pods \"calico-typha-78c7fc97fb-fqpfg\" is forbidden: User \"system:node:172-237-129-174\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" May 27 18:17:09.752382 kubelet[2718]: I0527 18:17:09.752331 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6af5459-9338-4445-a947-616fa0fab20c-tigera-ca-bundle\") pod \"calico-typha-78c7fc97fb-fqpfg\" (UID: \"b6af5459-9338-4445-a947-616fa0fab20c\") " pod="calico-system/calico-typha-78c7fc97fb-fqpfg" May 27 18:17:09.752382 kubelet[2718]: I0527 18:17:09.752370 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6af5459-9338-4445-a947-616fa0fab20c-typha-certs\") pod \"calico-typha-78c7fc97fb-fqpfg\" (UID: \"b6af5459-9338-4445-a947-616fa0fab20c\") " pod="calico-system/calico-typha-78c7fc97fb-fqpfg" May 27 18:17:09.752382 kubelet[2718]: I0527 18:17:09.752390 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-t6jwk\" (UniqueName: \"kubernetes.io/projected/b6af5459-9338-4445-a947-616fa0fab20c-kube-api-access-t6jwk\") pod \"calico-typha-78c7fc97fb-fqpfg\" (UID: \"b6af5459-9338-4445-a947-616fa0fab20c\") " pod="calico-system/calico-typha-78c7fc97fb-fqpfg" May 27 18:17:09.927964 systemd[1]: Created slice kubepods-besteffort-pod769f3661_7e2d_4d36_80d6_3bcc3316b789.slice - libcontainer container kubepods-besteffort-pod769f3661_7e2d_4d36_80d6_3bcc3316b789.slice. May 27 18:17:09.954614 kubelet[2718]: I0527 18:17:09.954323 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-cni-bin-dir\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954614 kubelet[2718]: I0527 18:17:09.954373 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-policysync\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954614 kubelet[2718]: I0527 18:17:09.954391 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/769f3661-7e2d-4d36-80d6-3bcc3316b789-tigera-ca-bundle\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954614 kubelet[2718]: I0527 18:17:09.954407 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-cni-net-dir\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " 
pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954614 kubelet[2718]: I0527 18:17:09.954421 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-lib-modules\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954886 kubelet[2718]: I0527 18:17:09.954435 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-var-run-calico\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954886 kubelet[2718]: I0527 18:17:09.954450 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-cni-log-dir\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954886 kubelet[2718]: I0527 18:17:09.954463 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/769f3661-7e2d-4d36-80d6-3bcc3316b789-node-certs\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954886 kubelet[2718]: I0527 18:17:09.954477 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-var-lib-calico\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.954886 kubelet[2718]: I0527 
18:17:09.954491 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-xtables-lock\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.955049 kubelet[2718]: I0527 18:17:09.954507 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhpg\" (UniqueName: \"kubernetes.io/projected/769f3661-7e2d-4d36-80d6-3bcc3316b789-kube-api-access-8bhpg\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:09.955049 kubelet[2718]: I0527 18:17:09.954524 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/769f3661-7e2d-4d36-80d6-3bcc3316b789-flexvol-driver-host\") pod \"calico-node-n2rpl\" (UID: \"769f3661-7e2d-4d36-80d6-3bcc3316b789\") " pod="calico-system/calico-node-n2rpl" May 27 18:17:10.056628 kubelet[2718]: E0527 18:17:10.056583 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.056628 kubelet[2718]: W0527 18:17:10.056608 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.056628 kubelet[2718]: E0527 18:17:10.056642 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.056800 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.058538 kubelet[2718]: W0527 18:17:10.056829 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.056847 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.057015 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.058538 kubelet[2718]: W0527 18:17:10.057021 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.057038 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.057207 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.058538 kubelet[2718]: W0527 18:17:10.057213 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.057232 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.058538 kubelet[2718]: E0527 18:17:10.057591 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.060220 kubelet[2718]: W0527 18:17:10.057598 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.060220 kubelet[2718]: E0527 18:17:10.057606 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.219791 kubelet[2718]: E0527 18:17:10.219728 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fsl8" podUID="98c7e566-5a28-4427-8f66-2be6467e7c39" May 27 18:17:10.247460 kubelet[2718]: E0527 18:17:10.247419 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.247460 kubelet[2718]: W0527 18:17:10.247443 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.247460 kubelet[2718]: E0527 18:17:10.247468 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.247692 kubelet[2718]: E0527 18:17:10.247663 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.247692 kubelet[2718]: W0527 18:17:10.247680 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.247692 kubelet[2718]: E0527 18:17:10.247688 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.256949 kubelet[2718]: E0527 18:17:10.256905 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.256949 kubelet[2718]: W0527 18:17:10.256922 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.256949 kubelet[2718]: E0527 18:17:10.256935 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.257210 kubelet[2718]: I0527 18:17:10.257107 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/98c7e566-5a28-4427-8f66-2be6467e7c39-varrun\") pod \"csi-node-driver-8fsl8\" (UID: \"98c7e566-5a28-4427-8f66-2be6467e7c39\") " pod="calico-system/csi-node-driver-8fsl8" May 27 18:17:10.257300 kubelet[2718]: E0527 18:17:10.257279 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.257300 kubelet[2718]: W0527 18:17:10.257294 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.257300 kubelet[2718]: E0527 18:17:10.257308 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.258390 kubelet[2718]: I0527 18:17:10.258353 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c7e566-5a28-4427-8f66-2be6467e7c39-socket-dir\") pod \"csi-node-driver-8fsl8\" (UID: \"98c7e566-5a28-4427-8f66-2be6467e7c39\") " pod="calico-system/csi-node-driver-8fsl8" May 27 18:17:10.258854 kubelet[2718]: E0527 18:17:10.258833 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.258854 kubelet[2718]: W0527 18:17:10.258852 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.258904 kubelet[2718]: E0527 18:17:10.258867 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.258904 kubelet[2718]: I0527 18:17:10.258883 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c7e566-5a28-4427-8f66-2be6467e7c39-kubelet-dir\") pod \"csi-node-driver-8fsl8\" (UID: \"98c7e566-5a28-4427-8f66-2be6467e7c39\") " pod="calico-system/csi-node-driver-8fsl8" May 27 18:17:10.259175 kubelet[2718]: E0527 18:17:10.259144 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.259175 kubelet[2718]: W0527 18:17:10.259160 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.259228 kubelet[2718]: E0527 18:17:10.259180 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.259228 kubelet[2718]: I0527 18:17:10.259198 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c7e566-5a28-4427-8f66-2be6467e7c39-registration-dir\") pod \"csi-node-driver-8fsl8\" (UID: \"98c7e566-5a28-4427-8f66-2be6467e7c39\") " pod="calico-system/csi-node-driver-8fsl8" May 27 18:17:10.259721 kubelet[2718]: E0527 18:17:10.259682 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.259721 kubelet[2718]: W0527 18:17:10.259699 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.259803 kubelet[2718]: E0527 18:17:10.259779 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.259832 kubelet[2718]: I0527 18:17:10.259804 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbgkf\" (UniqueName: \"kubernetes.io/projected/98c7e566-5a28-4427-8f66-2be6467e7c39-kube-api-access-gbgkf\") pod \"csi-node-driver-8fsl8\" (UID: \"98c7e566-5a28-4427-8f66-2be6467e7c39\") " pod="calico-system/csi-node-driver-8fsl8" May 27 18:17:10.260069 kubelet[2718]: E0527 18:17:10.260046 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.260069 kubelet[2718]: W0527 18:17:10.260062 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.260356 kubelet[2718]: E0527 18:17:10.260335 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.260762 kubelet[2718]: E0527 18:17:10.260740 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.260762 kubelet[2718]: W0527 18:17:10.260757 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.260879 kubelet[2718]: E0527 18:17:10.260839 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.361110 kubelet[2718]: E0527 18:17:10.360963 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.361110 kubelet[2718]: W0527 18:17:10.361011 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.361110 kubelet[2718]: E0527 18:17:10.361028 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.361402 kubelet[2718]: E0527 18:17:10.361381 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.361479 kubelet[2718]: W0527 18:17:10.361459 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.361662 kubelet[2718]: E0527 18:17:10.361627 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.361741 kubelet[2718]: E0527 18:17:10.361732 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.361791 kubelet[2718]: W0527 18:17:10.361782 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.361897 kubelet[2718]: E0527 18:17:10.361834 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.362045 kubelet[2718]: E0527 18:17:10.362036 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.362139 kubelet[2718]: W0527 18:17:10.362082 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.362139 kubelet[2718]: E0527 18:17:10.362105 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.362446 kubelet[2718]: E0527 18:17:10.362321 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.362446 kubelet[2718]: W0527 18:17:10.362330 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.362446 kubelet[2718]: E0527 18:17:10.362349 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.362720 kubelet[2718]: E0527 18:17:10.362691 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.362755 kubelet[2718]: W0527 18:17:10.362719 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.362783 kubelet[2718]: E0527 18:17:10.362757 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.363809 kubelet[2718]: E0527 18:17:10.363750 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.363809 kubelet[2718]: W0527 18:17:10.363768 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.364222 kubelet[2718]: E0527 18:17:10.364187 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.365484 kubelet[2718]: E0527 18:17:10.365455 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.365484 kubelet[2718]: W0527 18:17:10.365474 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.366167 kubelet[2718]: E0527 18:17:10.366124 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.366736 kubelet[2718]: E0527 18:17:10.366709 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.366736 kubelet[2718]: W0527 18:17:10.366728 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.366908 kubelet[2718]: E0527 18:17:10.366781 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.367374 kubelet[2718]: E0527 18:17:10.367347 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.367374 kubelet[2718]: W0527 18:17:10.367362 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.367679 kubelet[2718]: E0527 18:17:10.367650 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.368135 kubelet[2718]: E0527 18:17:10.368012 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.368135 kubelet[2718]: W0527 18:17:10.368030 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.368284 kubelet[2718]: E0527 18:17:10.368254 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.369002 kubelet[2718]: E0527 18:17:10.368669 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.369002 kubelet[2718]: W0527 18:17:10.368684 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.369002 kubelet[2718]: E0527 18:17:10.368733 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.369002 kubelet[2718]: E0527 18:17:10.368893 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.369002 kubelet[2718]: W0527 18:17:10.368901 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.369002 kubelet[2718]: E0527 18:17:10.369007 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.369160 kubelet[2718]: E0527 18:17:10.369134 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.369160 kubelet[2718]: W0527 18:17:10.369151 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.369257 kubelet[2718]: E0527 18:17:10.369230 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.369381 kubelet[2718]: E0527 18:17:10.369345 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.369381 kubelet[2718]: W0527 18:17:10.369361 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.369381 kubelet[2718]: E0527 18:17:10.369373 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.369562 kubelet[2718]: E0527 18:17:10.369535 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.369601 kubelet[2718]: W0527 18:17:10.369574 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.369601 kubelet[2718]: E0527 18:17:10.369594 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.369790 kubelet[2718]: E0527 18:17:10.369764 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.369790 kubelet[2718]: W0527 18:17:10.369781 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.369848 kubelet[2718]: E0527 18:17:10.369801 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.370075 kubelet[2718]: E0527 18:17:10.370037 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.370075 kubelet[2718]: W0527 18:17:10.370054 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.370221 kubelet[2718]: E0527 18:17:10.370133 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.370286 kubelet[2718]: E0527 18:17:10.370261 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.370286 kubelet[2718]: W0527 18:17:10.370278 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.370418 kubelet[2718]: E0527 18:17:10.370356 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.370494 kubelet[2718]: E0527 18:17:10.370468 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.370494 kubelet[2718]: W0527 18:17:10.370485 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.370639 kubelet[2718]: E0527 18:17:10.370595 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.370702 kubelet[2718]: E0527 18:17:10.370677 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.370702 kubelet[2718]: W0527 18:17:10.370693 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.370751 kubelet[2718]: E0527 18:17:10.370735 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.370945 kubelet[2718]: E0527 18:17:10.370869 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.370945 kubelet[2718]: W0527 18:17:10.370885 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.370945 kubelet[2718]: E0527 18:17:10.370909 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.371170 kubelet[2718]: E0527 18:17:10.371143 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.371170 kubelet[2718]: W0527 18:17:10.371162 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.371236 kubelet[2718]: E0527 18:17:10.371181 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.371678 kubelet[2718]: E0527 18:17:10.371619 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.371678 kubelet[2718]: W0527 18:17:10.371634 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.371678 kubelet[2718]: E0527 18:17:10.371646 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.371859 kubelet[2718]: E0527 18:17:10.371803 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.371859 kubelet[2718]: W0527 18:17:10.371817 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.371859 kubelet[2718]: E0527 18:17:10.371824 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.616324 kubelet[2718]: E0527 18:17:10.615993 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.616324 kubelet[2718]: W0527 18:17:10.616014 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.616324 kubelet[2718]: E0527 18:17:10.616032 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.623216 kubelet[2718]: E0527 18:17:10.623202 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.623289 kubelet[2718]: W0527 18:17:10.623279 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.623344 kubelet[2718]: E0527 18:17:10.623335 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.632409 kubelet[2718]: E0527 18:17:10.632397 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.632488 kubelet[2718]: W0527 18:17:10.632478 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.632576 kubelet[2718]: E0527 18:17:10.632535 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.785314 kubelet[2718]: E0527 18:17:10.785278 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.785570 kubelet[2718]: W0527 18:17:10.785311 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.785570 kubelet[2718]: E0527 18:17:10.785339 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.787304 kubelet[2718]: E0527 18:17:10.787267 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.787304 kubelet[2718]: W0527 18:17:10.787283 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.787304 kubelet[2718]: E0527 18:17:10.787298 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:10.834460 containerd[1559]: time="2025-05-27T18:17:10.834399213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2rpl,Uid:769f3661-7e2d-4d36-80d6-3bcc3316b789,Namespace:calico-system,Attempt:0,}" May 27 18:17:10.853755 kubelet[2718]: E0527 18:17:10.853644 2718 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition May 27 18:17:10.853867 kubelet[2718]: E0527 18:17:10.853778 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6af5459-9338-4445-a947-616fa0fab20c-typha-certs podName:b6af5459-9338-4445-a947-616fa0fab20c nodeName:}" failed. No retries permitted until 2025-05-27 18:17:11.353750812 +0000 UTC m=+19.096294883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/b6af5459-9338-4445-a947-616fa0fab20c-typha-certs") pod "calico-typha-78c7fc97fb-fqpfg" (UID: "b6af5459-9338-4445-a947-616fa0fab20c") : failed to sync secret cache: timed out waiting for the condition May 27 18:17:10.860011 containerd[1559]: time="2025-05-27T18:17:10.859923614Z" level=info msg="connecting to shim ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6" address="unix:///run/containerd/s/c7a2408691f90086130fae7378da96626f933907c34017b7168bbc8347c577d6" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:10.872153 kubelet[2718]: E0527 18:17:10.872086 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.872214 kubelet[2718]: W0527 18:17:10.872202 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.872301 kubelet[2718]: E0527 18:17:10.872250 2718 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:10.886079 systemd[1]: Started cri-containerd-ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6.scope - libcontainer container ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6. May 27 18:17:10.915487 containerd[1559]: time="2025-05-27T18:17:10.915433806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2rpl,Uid:769f3661-7e2d-4d36-80d6-3bcc3316b789,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\"" May 27 18:17:10.918718 containerd[1559]: time="2025-05-27T18:17:10.918679111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 18:17:10.973828 kubelet[2718]: E0527 18:17:10.973797 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:10.973828 kubelet[2718]: W0527 18:17:10.973824 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:10.973910 kubelet[2718]: E0527 18:17:10.973852 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:11.075362 kubelet[2718]: E0527 18:17:11.075332 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.075405 kubelet[2718]: W0527 18:17:11.075358 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.075405 kubelet[2718]: E0527 18:17:11.075398 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:11.176800 kubelet[2718]: E0527 18:17:11.176681 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.176800 kubelet[2718]: W0527 18:17:11.176716 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.176800 kubelet[2718]: E0527 18:17:11.176740 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:11.277750 kubelet[2718]: E0527 18:17:11.277704 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.277750 kubelet[2718]: W0527 18:17:11.277731 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.277750 kubelet[2718]: E0527 18:17:11.277756 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:11.345718 kubelet[2718]: E0527 18:17:11.345643 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fsl8" podUID="98c7e566-5a28-4427-8f66-2be6467e7c39" May 27 18:17:11.379097 kubelet[2718]: E0527 18:17:11.379057 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.379097 kubelet[2718]: W0527 18:17:11.379081 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.379332 kubelet[2718]: E0527 18:17:11.379106 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:11.379432 kubelet[2718]: E0527 18:17:11.379413 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.379432 kubelet[2718]: W0527 18:17:11.379428 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.379503 kubelet[2718]: E0527 18:17:11.379439 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:11.379734 kubelet[2718]: E0527 18:17:11.379696 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.379734 kubelet[2718]: W0527 18:17:11.379719 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.379860 kubelet[2718]: E0527 18:17:11.379744 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:11.380000 kubelet[2718]: E0527 18:17:11.379958 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.380000 kubelet[2718]: W0527 18:17:11.379970 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.380075 kubelet[2718]: E0527 18:17:11.380004 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:11.380216 kubelet[2718]: E0527 18:17:11.380195 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.380216 kubelet[2718]: W0527 18:17:11.380210 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.380300 kubelet[2718]: E0527 18:17:11.380221 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 18:17:11.384502 kubelet[2718]: E0527 18:17:11.384469 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 18:17:11.384502 kubelet[2718]: W0527 18:17:11.384487 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 18:17:11.384502 kubelet[2718]: E0527 18:17:11.384497 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 18:17:11.519470 kubelet[2718]: E0527 18:17:11.519153 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:11.519996 containerd[1559]: time="2025-05-27T18:17:11.519642636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c7fc97fb-fqpfg,Uid:b6af5459-9338-4445-a947-616fa0fab20c,Namespace:calico-system,Attempt:0,}" May 27 18:17:11.541512 containerd[1559]: time="2025-05-27T18:17:11.541253216Z" level=info msg="connecting to shim f2e80d6554c65ccab0317961703fe3f945655da1e6ae30fa7c8ca16bf58c87dc" address="unix:///run/containerd/s/2722fd3e375c999319c8b441646b9d7719aad355596bd9b6af8698d0fbc8a805" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:11.574123 systemd[1]: Started cri-containerd-f2e80d6554c65ccab0317961703fe3f945655da1e6ae30fa7c8ca16bf58c87dc.scope - libcontainer container f2e80d6554c65ccab0317961703fe3f945655da1e6ae30fa7c8ca16bf58c87dc. 
May 27 18:17:11.652281 containerd[1559]: time="2025-05-27T18:17:11.652209088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78c7fc97fb-fqpfg,Uid:b6af5459-9338-4445-a947-616fa0fab20c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2e80d6554c65ccab0317961703fe3f945655da1e6ae30fa7c8ca16bf58c87dc\"" May 27 18:17:11.653849 kubelet[2718]: E0527 18:17:11.653804 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:11.736437 containerd[1559]: time="2025-05-27T18:17:11.736372220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:11.737960 containerd[1559]: time="2025-05-27T18:17:11.737922999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 27 18:17:11.739154 containerd[1559]: time="2025-05-27T18:17:11.738619633Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:11.742051 containerd[1559]: time="2025-05-27T18:17:11.742029126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:11.743115 containerd[1559]: time="2025-05-27T18:17:11.743094040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 824.368846ms" May 27 18:17:11.743209 containerd[1559]: time="2025-05-27T18:17:11.743193645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 18:17:11.747414 containerd[1559]: time="2025-05-27T18:17:11.747361085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 18:17:11.749540 containerd[1559]: time="2025-05-27T18:17:11.749495562Z" level=info msg="CreateContainer within sandbox \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 18:17:11.755928 containerd[1559]: time="2025-05-27T18:17:11.755891174Z" level=info msg="Container 583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:11.763714 containerd[1559]: time="2025-05-27T18:17:11.763668546Z" level=info msg="CreateContainer within sandbox \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\"" May 27 18:17:11.764570 containerd[1559]: time="2025-05-27T18:17:11.764496188Z" level=info msg="StartContainer for \"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\"" May 27 18:17:11.766376 containerd[1559]: time="2025-05-27T18:17:11.766338571Z" level=info msg="connecting to shim 583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f" address="unix:///run/containerd/s/c7a2408691f90086130fae7378da96626f933907c34017b7168bbc8347c577d6" protocol=ttrpc version=3 May 27 18:17:11.787174 systemd[1]: Started cri-containerd-583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f.scope - 
libcontainer container 583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f. May 27 18:17:11.834605 containerd[1559]: time="2025-05-27T18:17:11.834571780Z" level=info msg="StartContainer for \"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\" returns successfully" May 27 18:17:11.846184 systemd[1]: cri-containerd-583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f.scope: Deactivated successfully. May 27 18:17:11.848163 containerd[1559]: time="2025-05-27T18:17:11.848141094Z" level=info msg="received exit event container_id:\"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\" id:\"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\" pid:3365 exited_at:{seconds:1748369831 nanos:847527874}" May 27 18:17:11.848386 containerd[1559]: time="2025-05-27T18:17:11.848359205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\" id:\"583be3242a05ed2ba49f96dd26960824f140a631e46e055e69ee1f25a639522f\" pid:3365 exited_at:{seconds:1748369831 nanos:847527874}" May 27 18:17:12.067543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504741679.mount: Deactivated successfully. 
May 27 18:17:13.225944 containerd[1559]: time="2025-05-27T18:17:13.225866850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:13.227023 containerd[1559]: time="2025-05-27T18:17:13.226879226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33665828" May 27 18:17:13.228022 containerd[1559]: time="2025-05-27T18:17:13.227962023Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:13.229795 containerd[1559]: time="2025-05-27T18:17:13.229751633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:13.230382 containerd[1559]: time="2025-05-27T18:17:13.230343430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 1.482937463s" May 27 18:17:13.230473 containerd[1559]: time="2025-05-27T18:17:13.230457085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 18:17:13.233265 containerd[1559]: time="2025-05-27T18:17:13.233245618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 18:17:13.251466 containerd[1559]: time="2025-05-27T18:17:13.251420654Z" level=info msg="CreateContainer within sandbox \"f2e80d6554c65ccab0317961703fe3f945655da1e6ae30fa7c8ca16bf58c87dc\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 18:17:13.260492 containerd[1559]: time="2025-05-27T18:17:13.260453245Z" level=info msg="Container 58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:13.271050 containerd[1559]: time="2025-05-27T18:17:13.271015604Z" level=info msg="CreateContainer within sandbox \"f2e80d6554c65ccab0317961703fe3f945655da1e6ae30fa7c8ca16bf58c87dc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f\"" May 27 18:17:13.272100 containerd[1559]: time="2025-05-27T18:17:13.272071591Z" level=info msg="StartContainer for \"58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f\"" May 27 18:17:13.273841 containerd[1559]: time="2025-05-27T18:17:13.273805347Z" level=info msg="connecting to shim 58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f" address="unix:///run/containerd/s/2722fd3e375c999319c8b441646b9d7719aad355596bd9b6af8698d0fbc8a805" protocol=ttrpc version=3 May 27 18:17:13.299170 systemd[1]: Started cri-containerd-58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f.scope - libcontainer container 58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f. 
May 27 18:17:13.348621 kubelet[2718]: E0527 18:17:13.348557 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fsl8" podUID="98c7e566-5a28-4427-8f66-2be6467e7c39" May 27 18:17:13.372422 containerd[1559]: time="2025-05-27T18:17:13.372283946Z" level=info msg="StartContainer for \"58509b28011d818bedbd8953c4c3f924fa00c42be6402acea4106095d580de0f\" returns successfully" May 27 18:17:13.459179 kubelet[2718]: E0527 18:17:13.459125 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:14.461353 kubelet[2718]: I0527 18:17:14.461297 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:17:14.462909 kubelet[2718]: E0527 18:17:14.461705 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:15.345848 kubelet[2718]: E0527 18:17:15.345707 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8fsl8" podUID="98c7e566-5a28-4427-8f66-2be6467e7c39" May 27 18:17:15.380519 containerd[1559]: time="2025-05-27T18:17:15.380475299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:15.381489 containerd[1559]: time="2025-05-27T18:17:15.381327192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, 
bytes read=70300568" May 27 18:17:15.382051 containerd[1559]: time="2025-05-27T18:17:15.382028060Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:15.383540 containerd[1559]: time="2025-05-27T18:17:15.383513617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:15.384059 containerd[1559]: time="2025-05-27T18:17:15.384037958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 2.150666735s" May 27 18:17:15.384128 containerd[1559]: time="2025-05-27T18:17:15.384114521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 18:17:15.386565 containerd[1559]: time="2025-05-27T18:17:15.386533245Z" level=info msg="CreateContainer within sandbox \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 18:17:15.391995 containerd[1559]: time="2025-05-27T18:17:15.391774100Z" level=info msg="Container bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:15.407336 containerd[1559]: time="2025-05-27T18:17:15.407313396Z" level=info msg="CreateContainer within sandbox \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\"" May 27 18:17:15.408382 containerd[1559]: time="2025-05-27T18:17:15.408352038Z" level=info msg="StartContainer for \"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\"" May 27 18:17:15.409942 containerd[1559]: time="2025-05-27T18:17:15.409912059Z" level=info msg="connecting to shim bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80" address="unix:///run/containerd/s/c7a2408691f90086130fae7378da96626f933907c34017b7168bbc8347c577d6" protocol=ttrpc version=3 May 27 18:17:15.430079 systemd[1]: Started cri-containerd-bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80.scope - libcontainer container bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80. May 27 18:17:15.465307 containerd[1559]: time="2025-05-27T18:17:15.464968498Z" level=info msg="StartContainer for \"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\" returns successfully" May 27 18:17:15.844193 systemd[1]: cri-containerd-bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80.scope: Deactivated successfully. May 27 18:17:15.845519 containerd[1559]: time="2025-05-27T18:17:15.844936627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\" id:\"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\" pid:3462 exited_at:{seconds:1748369835 nanos:844643756}" May 27 18:17:15.845519 containerd[1559]: time="2025-05-27T18:17:15.845023860Z" level=info msg="received exit event container_id:\"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\" id:\"bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80\" pid:3462 exited_at:{seconds:1748369835 nanos:844643756}" May 27 18:17:15.845234 systemd[1]: cri-containerd-bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80.scope: Consumed 412ms CPU time, 194.8M memory peak, 170.9M written to disk. 
May 27 18:17:15.863895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfc9c150127945f51fe4dbf06499620924feea65809ab805b7a7aba4612d9d80-rootfs.mount: Deactivated successfully. May 27 18:17:15.879002 kubelet[2718]: I0527 18:17:15.878740 2718 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 18:17:15.924008 kubelet[2718]: I0527 18:17:15.923790 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78c7fc97fb-fqpfg" podStartSLOduration=5.347234123 podStartE2EDuration="6.923773365s" podCreationTimestamp="2025-05-27 18:17:09 +0000 UTC" firstStartedPulling="2025-05-27 18:17:11.655056792 +0000 UTC m=+19.397600873" lastFinishedPulling="2025-05-27 18:17:13.231596044 +0000 UTC m=+20.974140115" observedRunningTime="2025-05-27 18:17:13.487483976 +0000 UTC m=+21.230028057" watchObservedRunningTime="2025-05-27 18:17:15.923773365 +0000 UTC m=+23.666317436" May 27 18:17:15.939180 systemd[1]: Created slice kubepods-besteffort-pod7eea353f_a1bb_479c_852d_42d62f71369b.slice - libcontainer container kubepods-besteffort-pod7eea353f_a1bb_479c_852d_42d62f71369b.slice. May 27 18:17:15.960824 systemd[1]: Created slice kubepods-besteffort-poda7a54593_937f_4d42_a4fe_5eb5065d47aa.slice - libcontainer container kubepods-besteffort-poda7a54593_937f_4d42_a4fe_5eb5065d47aa.slice. 
May 27 18:17:15.965461 kubelet[2718]: W0527 18:17:15.965424 2718 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:172-237-129-174" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node '172-237-129-174' and this object May 27 18:17:15.965564 kubelet[2718]: E0527 18:17:15.965467 2718 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:172-237-129-174\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" logger="UnhandledError" May 27 18:17:15.965564 kubelet[2718]: W0527 18:17:15.965501 2718 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:172-237-129-174" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172-237-129-174' and this object May 27 18:17:15.965564 kubelet[2718]: E0527 18:17:15.965511 2718 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:172-237-129-174\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" logger="UnhandledError" May 27 18:17:15.965564 kubelet[2718]: W0527 18:17:15.965537 2718 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:172-237-129-174" cannot list resource "configmaps" in API group "" 
in the namespace "calico-system": no relationship found between node '172-237-129-174' and this object May 27 18:17:15.965564 kubelet[2718]: E0527 18:17:15.965546 2718 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:172-237-129-174\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-237-129-174' and this object" logger="UnhandledError" May 27 18:17:15.969604 systemd[1]: Created slice kubepods-burstable-pod0cd2286d_e934_4c13_8046_58f70aab7816.slice - libcontainer container kubepods-burstable-pod0cd2286d_e934_4c13_8046_58f70aab7816.slice. May 27 18:17:15.980704 systemd[1]: Created slice kubepods-burstable-poddd5bd327_01f8_498a_b961_849e97947b28.slice - libcontainer container kubepods-burstable-poddd5bd327_01f8_498a_b961_849e97947b28.slice. May 27 18:17:15.985110 systemd[1]: Created slice kubepods-besteffort-pod86e25ba7_7f97_4a6e_b732_651388f4e9ac.slice - libcontainer container kubepods-besteffort-pod86e25ba7_7f97_4a6e_b732_651388f4e9ac.slice. May 27 18:17:15.992807 systemd[1]: Created slice kubepods-besteffort-pod84df68e6_c45b_4638_b0fd_ffc34714916b.slice - libcontainer container kubepods-besteffort-pod84df68e6_c45b_4638_b0fd_ffc34714916b.slice. May 27 18:17:16.000302 systemd[1]: Created slice kubepods-besteffort-pod3ce55071_e071_4152_8491_d9e57fa88f84.slice - libcontainer container kubepods-besteffort-pod3ce55071_e071_4152_8491_d9e57fa88f84.slice. 
May 27 18:17:16.015536 kubelet[2718]: I0527 18:17:16.015428 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cd2286d-e934-4c13-8046-58f70aab7816-config-volume\") pod \"coredns-668d6bf9bc-wz2n4\" (UID: \"0cd2286d-e934-4c13-8046-58f70aab7816\") " pod="kube-system/coredns-668d6bf9bc-wz2n4" May 27 18:17:16.015536 kubelet[2718]: I0527 18:17:16.015502 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7a54593-937f-4d42-a4fe-5eb5065d47aa-tigera-ca-bundle\") pod \"calico-kube-controllers-6cd666d6b8-lfdk4\" (UID: \"a7a54593-937f-4d42-a4fe-5eb5065d47aa\") " pod="calico-system/calico-kube-controllers-6cd666d6b8-lfdk4" May 27 18:17:16.015737 kubelet[2718]: I0527 18:17:16.015691 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8lxl\" (UniqueName: \"kubernetes.io/projected/dd5bd327-01f8-498a-b961-849e97947b28-kube-api-access-p8lxl\") pod \"coredns-668d6bf9bc-fjksh\" (UID: \"dd5bd327-01f8-498a-b961-849e97947b28\") " pod="kube-system/coredns-668d6bf9bc-fjksh" May 27 18:17:16.015737 kubelet[2718]: I0527 18:17:16.015716 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-backend-key-pair\") pod \"whisker-7648cd6b95-7jz8n\" (UID: \"7eea353f-a1bb-479c-852d-42d62f71369b\") " pod="calico-system/whisker-7648cd6b95-7jz8n" May 27 18:17:16.015895 kubelet[2718]: I0527 18:17:16.015826 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3ce55071-e071-4152-8491-d9e57fa88f84-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-fv6vw\" (UID: 
\"3ce55071-e071-4152-8491-d9e57fa88f84\") " pod="calico-system/goldmane-78d55f7ddc-fv6vw" May 27 18:17:16.015895 kubelet[2718]: I0527 18:17:16.015848 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84df68e6-c45b-4638-b0fd-ffc34714916b-calico-apiserver-certs\") pod \"calico-apiserver-6b98f7c89d-tghp7\" (UID: \"84df68e6-c45b-4638-b0fd-ffc34714916b\") " pod="calico-apiserver/calico-apiserver-6b98f7c89d-tghp7" May 27 18:17:16.016017 kubelet[2718]: I0527 18:17:16.015862 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqmc\" (UniqueName: \"kubernetes.io/projected/86e25ba7-7f97-4a6e-b732-651388f4e9ac-kube-api-access-tzqmc\") pod \"calico-apiserver-6b98f7c89d-fg2h4\" (UID: \"86e25ba7-7f97-4a6e-b732-651388f4e9ac\") " pod="calico-apiserver/calico-apiserver-6b98f7c89d-fg2h4" May 27 18:17:16.016279 kubelet[2718]: I0527 18:17:16.016092 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnq5v\" (UniqueName: \"kubernetes.io/projected/a7a54593-937f-4d42-a4fe-5eb5065d47aa-kube-api-access-pnq5v\") pod \"calico-kube-controllers-6cd666d6b8-lfdk4\" (UID: \"a7a54593-937f-4d42-a4fe-5eb5065d47aa\") " pod="calico-system/calico-kube-controllers-6cd666d6b8-lfdk4" May 27 18:17:16.016279 kubelet[2718]: I0527 18:17:16.016109 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ce55071-e071-4152-8491-d9e57fa88f84-config\") pod \"goldmane-78d55f7ddc-fv6vw\" (UID: \"3ce55071-e071-4152-8491-d9e57fa88f84\") " pod="calico-system/goldmane-78d55f7ddc-fv6vw" May 27 18:17:16.016279 kubelet[2718]: I0527 18:17:16.016123 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/86e25ba7-7f97-4a6e-b732-651388f4e9ac-calico-apiserver-certs\") pod \"calico-apiserver-6b98f7c89d-fg2h4\" (UID: \"86e25ba7-7f97-4a6e-b732-651388f4e9ac\") " pod="calico-apiserver/calico-apiserver-6b98f7c89d-fg2h4" May 27 18:17:16.016279 kubelet[2718]: I0527 18:17:16.016138 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd5bd327-01f8-498a-b961-849e97947b28-config-volume\") pod \"coredns-668d6bf9bc-fjksh\" (UID: \"dd5bd327-01f8-498a-b961-849e97947b28\") " pod="kube-system/coredns-668d6bf9bc-fjksh" May 27 18:17:16.016279 kubelet[2718]: I0527 18:17:16.016149 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwlr\" (UniqueName: \"kubernetes.io/projected/84df68e6-c45b-4638-b0fd-ffc34714916b-kube-api-access-whwlr\") pod \"calico-apiserver-6b98f7c89d-tghp7\" (UID: \"84df68e6-c45b-4638-b0fd-ffc34714916b\") " pod="calico-apiserver/calico-apiserver-6b98f7c89d-tghp7" May 27 18:17:16.016386 kubelet[2718]: I0527 18:17:16.016160 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq84s\" (UniqueName: \"kubernetes.io/projected/7eea353f-a1bb-479c-852d-42d62f71369b-kube-api-access-wq84s\") pod \"whisker-7648cd6b95-7jz8n\" (UID: \"7eea353f-a1bb-479c-852d-42d62f71369b\") " pod="calico-system/whisker-7648cd6b95-7jz8n" May 27 18:17:16.016386 kubelet[2718]: I0527 18:17:16.016180 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jzt8\" (UniqueName: \"kubernetes.io/projected/3ce55071-e071-4152-8491-d9e57fa88f84-kube-api-access-5jzt8\") pod \"goldmane-78d55f7ddc-fv6vw\" (UID: \"3ce55071-e071-4152-8491-d9e57fa88f84\") " pod="calico-system/goldmane-78d55f7ddc-fv6vw" May 27 18:17:16.016386 kubelet[2718]: I0527 18:17:16.016196 2718 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-ca-bundle\") pod \"whisker-7648cd6b95-7jz8n\" (UID: \"7eea353f-a1bb-479c-852d-42d62f71369b\") " pod="calico-system/whisker-7648cd6b95-7jz8n" May 27 18:17:16.016386 kubelet[2718]: I0527 18:17:16.016210 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ce55071-e071-4152-8491-d9e57fa88f84-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-fv6vw\" (UID: \"3ce55071-e071-4152-8491-d9e57fa88f84\") " pod="calico-system/goldmane-78d55f7ddc-fv6vw" May 27 18:17:16.016386 kubelet[2718]: I0527 18:17:16.016222 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vrg\" (UniqueName: \"kubernetes.io/projected/0cd2286d-e934-4c13-8046-58f70aab7816-kube-api-access-w9vrg\") pod \"coredns-668d6bf9bc-wz2n4\" (UID: \"0cd2286d-e934-4c13-8046-58f70aab7816\") " pod="kube-system/coredns-668d6bf9bc-wz2n4" May 27 18:17:16.260003 containerd[1559]: time="2025-05-27T18:17:16.259876777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7648cd6b95-7jz8n,Uid:7eea353f-a1bb-479c-852d-42d62f71369b,Namespace:calico-system,Attempt:0,}" May 27 18:17:16.269836 containerd[1559]: time="2025-05-27T18:17:16.269807910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd666d6b8-lfdk4,Uid:a7a54593-937f-4d42-a4fe-5eb5065d47aa,Namespace:calico-system,Attempt:0,}" May 27 18:17:16.274963 kubelet[2718]: E0527 18:17:16.274917 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:16.278945 containerd[1559]: time="2025-05-27T18:17:16.278896044Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wz2n4,Uid:0cd2286d-e934-4c13-8046-58f70aab7816,Namespace:kube-system,Attempt:0,}" May 27 18:17:16.290504 kubelet[2718]: E0527 18:17:16.290389 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:16.291955 containerd[1559]: time="2025-05-27T18:17:16.291923961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-fg2h4,Uid:86e25ba7-7f97-4a6e-b732-651388f4e9ac,Namespace:calico-apiserver,Attempt:0,}" May 27 18:17:16.292673 containerd[1559]: time="2025-05-27T18:17:16.292453891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjksh,Uid:dd5bd327-01f8-498a-b961-849e97947b28,Namespace:kube-system,Attempt:0,}" May 27 18:17:16.296658 containerd[1559]: time="2025-05-27T18:17:16.296635434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-tghp7,Uid:84df68e6-c45b-4638-b0fd-ffc34714916b,Namespace:calico-apiserver,Attempt:0,}" May 27 18:17:16.375671 containerd[1559]: time="2025-05-27T18:17:16.375539695Z" level=error msg="Failed to destroy network for sandbox \"a074c6ca57797f9d0a718c664489a3dd011ebfbf5d3c9c3508a5afcb97d4c1ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.377525 containerd[1559]: time="2025-05-27T18:17:16.377457425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjksh,Uid:dd5bd327-01f8-498a-b961-849e97947b28,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a074c6ca57797f9d0a718c664489a3dd011ebfbf5d3c9c3508a5afcb97d4c1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.378078 kubelet[2718]: E0527 18:17:16.377812 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a074c6ca57797f9d0a718c664489a3dd011ebfbf5d3c9c3508a5afcb97d4c1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.378078 kubelet[2718]: E0527 18:17:16.377879 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a074c6ca57797f9d0a718c664489a3dd011ebfbf5d3c9c3508a5afcb97d4c1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjksh" May 27 18:17:16.378078 kubelet[2718]: E0527 18:17:16.377900 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a074c6ca57797f9d0a718c664489a3dd011ebfbf5d3c9c3508a5afcb97d4c1ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjksh" May 27 18:17:16.379666 kubelet[2718]: E0527 18:17:16.377948 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fjksh_kube-system(dd5bd327-01f8-498a-b961-849e97947b28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fjksh_kube-system(dd5bd327-01f8-498a-b961-849e97947b28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a074c6ca57797f9d0a718c664489a3dd011ebfbf5d3c9c3508a5afcb97d4c1ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fjksh" podUID="dd5bd327-01f8-498a-b961-849e97947b28" May 27 18:17:16.436648 containerd[1559]: time="2025-05-27T18:17:16.436511490Z" level=error msg="Failed to destroy network for sandbox \"2395a28509c6b5fdd4da7f512ffb3b6582ea4f2ce2193d1081ba68a4ab4a34b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.440105 systemd[1]: run-netns-cni\x2d21c6ecbc\x2d5924\x2d8d2b\x2d8096\x2dba11f3207432.mount: Deactivated successfully. May 27 18:17:16.443606 containerd[1559]: time="2025-05-27T18:17:16.443549977Z" level=error msg="Failed to destroy network for sandbox \"be1a726221be5313910e218e680943df3fbd6d2ce7a345b96cb6fb0c7427bd1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.445390 systemd[1]: run-netns-cni\x2dfd58ed56\x2d05f7\x2de89d\x2dc33f\x2dfdebfd302530.mount: Deactivated successfully. 
May 27 18:17:16.445917 containerd[1559]: time="2025-05-27T18:17:16.445844801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7648cd6b95-7jz8n,Uid:7eea353f-a1bb-479c-852d-42d62f71369b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395a28509c6b5fdd4da7f512ffb3b6582ea4f2ce2193d1081ba68a4ab4a34b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.446545 kubelet[2718]: E0527 18:17:16.446514 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395a28509c6b5fdd4da7f512ffb3b6582ea4f2ce2193d1081ba68a4ab4a34b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.446596 kubelet[2718]: E0527 18:17:16.446572 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395a28509c6b5fdd4da7f512ffb3b6582ea4f2ce2193d1081ba68a4ab4a34b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7648cd6b95-7jz8n" May 27 18:17:16.446615 kubelet[2718]: E0527 18:17:16.446595 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2395a28509c6b5fdd4da7f512ffb3b6582ea4f2ce2193d1081ba68a4ab4a34b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7648cd6b95-7jz8n" May 27 18:17:16.446666 kubelet[2718]: E0527 18:17:16.446635 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7648cd6b95-7jz8n_calico-system(7eea353f-a1bb-479c-852d-42d62f71369b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7648cd6b95-7jz8n_calico-system(7eea353f-a1bb-479c-852d-42d62f71369b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2395a28509c6b5fdd4da7f512ffb3b6582ea4f2ce2193d1081ba68a4ab4a34b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7648cd6b95-7jz8n" podUID="7eea353f-a1bb-479c-852d-42d62f71369b" May 27 18:17:16.450609 containerd[1559]: time="2025-05-27T18:17:16.449961972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd666d6b8-lfdk4,Uid:a7a54593-937f-4d42-a4fe-5eb5065d47aa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1a726221be5313910e218e680943df3fbd6d2ce7a345b96cb6fb0c7427bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.451520 kubelet[2718]: E0527 18:17:16.450871 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1a726221be5313910e218e680943df3fbd6d2ce7a345b96cb6fb0c7427bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.451520 kubelet[2718]: E0527 18:17:16.450905 2718 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1a726221be5313910e218e680943df3fbd6d2ce7a345b96cb6fb0c7427bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cd666d6b8-lfdk4" May 27 18:17:16.451520 kubelet[2718]: E0527 18:17:16.450921 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1a726221be5313910e218e680943df3fbd6d2ce7a345b96cb6fb0c7427bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cd666d6b8-lfdk4" May 27 18:17:16.451646 kubelet[2718]: E0527 18:17:16.450958 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cd666d6b8-lfdk4_calico-system(a7a54593-937f-4d42-a4fe-5eb5065d47aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cd666d6b8-lfdk4_calico-system(a7a54593-937f-4d42-a4fe-5eb5065d47aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be1a726221be5313910e218e680943df3fbd6d2ce7a345b96cb6fb0c7427bd1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cd666d6b8-lfdk4" podUID="a7a54593-937f-4d42-a4fe-5eb5065d47aa" May 27 18:17:16.462410 containerd[1559]: time="2025-05-27T18:17:16.462372126Z" level=error msg="Failed to destroy network for sandbox \"3d8a3c2947d74256c6f3cf0427aeda200d48e79a05aaa08a490a6b92f600dcbe\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.462874 containerd[1559]: time="2025-05-27T18:17:16.462376916Z" level=error msg="Failed to destroy network for sandbox \"348e9fd6bfbb4980aef7c4fab32ed50cfde10e15569fc4ebd5a8e2bb2e974c64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.463638 containerd[1559]: time="2025-05-27T18:17:16.463614112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-tghp7,Uid:84df68e6-c45b-4638-b0fd-ffc34714916b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8a3c2947d74256c6f3cf0427aeda200d48e79a05aaa08a490a6b92f600dcbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.463905 kubelet[2718]: E0527 18:17:16.463884 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8a3c2947d74256c6f3cf0427aeda200d48e79a05aaa08a490a6b92f600dcbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.464695 kubelet[2718]: E0527 18:17:16.464054 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8a3c2947d74256c6f3cf0427aeda200d48e79a05aaa08a490a6b92f600dcbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b98f7c89d-tghp7" May 27 18:17:16.464695 kubelet[2718]: E0527 18:17:16.464079 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d8a3c2947d74256c6f3cf0427aeda200d48e79a05aaa08a490a6b92f600dcbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b98f7c89d-tghp7" May 27 18:17:16.464695 kubelet[2718]: E0527 18:17:16.464117 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b98f7c89d-tghp7_calico-apiserver(84df68e6-c45b-4638-b0fd-ffc34714916b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b98f7c89d-tghp7_calico-apiserver(84df68e6-c45b-4638-b0fd-ffc34714916b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d8a3c2947d74256c6f3cf0427aeda200d48e79a05aaa08a490a6b92f600dcbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b98f7c89d-tghp7" podUID="84df68e6-c45b-4638-b0fd-ffc34714916b" May 27 18:17:16.466025 systemd[1]: run-netns-cni\x2d9a4d1e90\x2d29b1\x2df2d1\x2d1087\x2db7d7be7a3bc8.mount: Deactivated successfully. May 27 18:17:16.466228 systemd[1]: run-netns-cni\x2ddcad73de\x2d3579\x2da372\x2d4fd0\x2d9d80d7e9fd09.mount: Deactivated successfully. 
May 27 18:17:16.466430 containerd[1559]: time="2025-05-27T18:17:16.466026560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-fg2h4,Uid:86e25ba7-7f97-4a6e-b732-651388f4e9ac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e9fd6bfbb4980aef7c4fab32ed50cfde10e15569fc4ebd5a8e2bb2e974c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.466938 kubelet[2718]: E0527 18:17:16.466662 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e9fd6bfbb4980aef7c4fab32ed50cfde10e15569fc4ebd5a8e2bb2e974c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.467081 kubelet[2718]: E0527 18:17:16.466690 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e9fd6bfbb4980aef7c4fab32ed50cfde10e15569fc4ebd5a8e2bb2e974c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b98f7c89d-fg2h4" May 27 18:17:16.467722 kubelet[2718]: E0527 18:17:16.467183 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"348e9fd6bfbb4980aef7c4fab32ed50cfde10e15569fc4ebd5a8e2bb2e974c64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6b98f7c89d-fg2h4" May 27 18:17:16.467851 kubelet[2718]: E0527 18:17:16.467798 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b98f7c89d-fg2h4_calico-apiserver(86e25ba7-7f97-4a6e-b732-651388f4e9ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b98f7c89d-fg2h4_calico-apiserver(86e25ba7-7f97-4a6e-b732-651388f4e9ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"348e9fd6bfbb4980aef7c4fab32ed50cfde10e15569fc4ebd5a8e2bb2e974c64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b98f7c89d-fg2h4" podUID="86e25ba7-7f97-4a6e-b732-651388f4e9ac" May 27 18:17:16.469746 containerd[1559]: time="2025-05-27T18:17:16.469719336Z" level=error msg="Failed to destroy network for sandbox \"52ff380444ebc7be9071b90fcb76ac0b4e1038126d7b5e97631ba798a3c7da47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.472195 containerd[1559]: time="2025-05-27T18:17:16.472148205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wz2n4,Uid:0cd2286d-e934-4c13-8046-58f70aab7816,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ff380444ebc7be9071b90fcb76ac0b4e1038126d7b5e97631ba798a3c7da47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.472470 kubelet[2718]: E0527 18:17:16.472437 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"52ff380444ebc7be9071b90fcb76ac0b4e1038126d7b5e97631ba798a3c7da47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:16.472516 kubelet[2718]: E0527 18:17:16.472502 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ff380444ebc7be9071b90fcb76ac0b4e1038126d7b5e97631ba798a3c7da47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wz2n4" May 27 18:17:16.472549 kubelet[2718]: E0527 18:17:16.472520 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ff380444ebc7be9071b90fcb76ac0b4e1038126d7b5e97631ba798a3c7da47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wz2n4" May 27 18:17:16.472570 kubelet[2718]: E0527 18:17:16.472550 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wz2n4_kube-system(0cd2286d-e934-4c13-8046-58f70aab7816)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wz2n4_kube-system(0cd2286d-e934-4c13-8046-58f70aab7816)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52ff380444ebc7be9071b90fcb76ac0b4e1038126d7b5e97631ba798a3c7da47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-wz2n4" podUID="0cd2286d-e934-4c13-8046-58f70aab7816" May 27 18:17:16.477327 containerd[1559]: time="2025-05-27T18:17:16.477287013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 18:17:17.118181 kubelet[2718]: E0527 18:17:17.117502 2718 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 27 18:17:17.118181 kubelet[2718]: E0527 18:17:17.117572 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ce55071-e071-4152-8491-d9e57fa88f84-goldmane-ca-bundle podName:3ce55071-e071-4152-8491-d9e57fa88f84 nodeName:}" failed. No retries permitted until 2025-05-27 18:17:17.617556861 +0000 UTC m=+25.360100932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/3ce55071-e071-4152-8491-d9e57fa88f84-goldmane-ca-bundle") pod "goldmane-78d55f7ddc-fv6vw" (UID: "3ce55071-e071-4152-8491-d9e57fa88f84") : failed to sync configmap cache: timed out waiting for the condition May 27 18:17:17.130783 kubelet[2718]: E0527 18:17:17.130715 2718 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition May 27 18:17:17.130783 kubelet[2718]: E0527 18:17:17.130762 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ce55071-e071-4152-8491-d9e57fa88f84-config podName:3ce55071-e071-4152-8491-d9e57fa88f84 nodeName:}" failed. No retries permitted until 2025-05-27 18:17:17.630753234 +0000 UTC m=+25.373297305 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3ce55071-e071-4152-8491-d9e57fa88f84-config") pod "goldmane-78d55f7ddc-fv6vw" (UID: "3ce55071-e071-4152-8491-d9e57fa88f84") : failed to sync configmap cache: timed out waiting for the condition May 27 18:17:17.352588 systemd[1]: Created slice kubepods-besteffort-pod98c7e566_5a28_4427_8f66_2be6467e7c39.slice - libcontainer container kubepods-besteffort-pod98c7e566_5a28_4427_8f66_2be6467e7c39.slice. May 27 18:17:17.356266 containerd[1559]: time="2025-05-27T18:17:17.356224616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fsl8,Uid:98c7e566-5a28-4427-8f66-2be6467e7c39,Namespace:calico-system,Attempt:0,}" May 27 18:17:17.399718 systemd[1]: run-netns-cni\x2d53bb2b25\x2d7383\x2d5291\x2d41e6\x2dc9a0d28b3150.mount: Deactivated successfully. May 27 18:17:17.441833 containerd[1559]: time="2025-05-27T18:17:17.441680625Z" level=error msg="Failed to destroy network for sandbox \"2e82104e89573150b6abdc68eb6b938df892fca6a9944fcaf33c87892469dcfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:17.444906 systemd[1]: run-netns-cni\x2d83c1db8c\x2d5e22\x2dc138\x2da7a7\x2d42e78e59b406.mount: Deactivated successfully. 
May 27 18:17:17.447230 containerd[1559]: time="2025-05-27T18:17:17.446812820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fsl8,Uid:98c7e566-5a28-4427-8f66-2be6467e7c39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e82104e89573150b6abdc68eb6b938df892fca6a9944fcaf33c87892469dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:17.447309 kubelet[2718]: E0527 18:17:17.447026 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e82104e89573150b6abdc68eb6b938df892fca6a9944fcaf33c87892469dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:17.447309 kubelet[2718]: E0527 18:17:17.447079 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e82104e89573150b6abdc68eb6b938df892fca6a9944fcaf33c87892469dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8fsl8" May 27 18:17:17.447309 kubelet[2718]: E0527 18:17:17.447114 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e82104e89573150b6abdc68eb6b938df892fca6a9944fcaf33c87892469dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8fsl8" 
May 27 18:17:17.447404 kubelet[2718]: E0527 18:17:17.447145 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8fsl8_calico-system(98c7e566-5a28-4427-8f66-2be6467e7c39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8fsl8_calico-system(98c7e566-5a28-4427-8f66-2be6467e7c39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e82104e89573150b6abdc68eb6b938df892fca6a9944fcaf33c87892469dcfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8fsl8" podUID="98c7e566-5a28-4427-8f66-2be6467e7c39" May 27 18:17:17.803437 containerd[1559]: time="2025-05-27T18:17:17.803364451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fv6vw,Uid:3ce55071-e071-4152-8491-d9e57fa88f84,Namespace:calico-system,Attempt:0,}" May 27 18:17:17.897674 containerd[1559]: time="2025-05-27T18:17:17.897533689Z" level=error msg="Failed to destroy network for sandbox \"e2d37d29c7ef6ce28653b41f94e171daceaf088873bd11191f16eba0041b1f70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:17.899507 systemd[1]: run-netns-cni\x2dd8803abe\x2df276\x2d8e0d\x2dd3f9\x2dbeed8ebde0ee.mount: Deactivated successfully. 
May 27 18:17:17.902587 containerd[1559]: time="2025-05-27T18:17:17.902554851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fv6vw,Uid:3ce55071-e071-4152-8491-d9e57fa88f84,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d37d29c7ef6ce28653b41f94e171daceaf088873bd11191f16eba0041b1f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:17.904025 kubelet[2718]: E0527 18:17:17.902999 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d37d29c7ef6ce28653b41f94e171daceaf088873bd11191f16eba0041b1f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 18:17:17.904025 kubelet[2718]: E0527 18:17:17.903081 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d37d29c7ef6ce28653b41f94e171daceaf088873bd11191f16eba0041b1f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-fv6vw" May 27 18:17:17.904025 kubelet[2718]: E0527 18:17:17.903104 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d37d29c7ef6ce28653b41f94e171daceaf088873bd11191f16eba0041b1f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-78d55f7ddc-fv6vw" May 27 18:17:17.904147 kubelet[2718]: E0527 18:17:17.903150 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-fv6vw_calico-system(3ce55071-e071-4152-8491-d9e57fa88f84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-fv6vw_calico-system(3ce55071-e071-4152-8491-d9e57fa88f84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2d37d29c7ef6ce28653b41f94e171daceaf088873bd11191f16eba0041b1f70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:17:20.096129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893060390.mount: Deactivated successfully. May 27 18:17:20.123672 containerd[1559]: time="2025-05-27T18:17:20.123634566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:20.124849 containerd[1559]: time="2025-05-27T18:17:20.124807719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 18:17:20.125833 containerd[1559]: time="2025-05-27T18:17:20.125769387Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:20.127400 containerd[1559]: time="2025-05-27T18:17:20.127371713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:20.127883 containerd[1559]: 
time="2025-05-27T18:17:20.127855936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 3.650541142s" May 27 18:17:20.127934 containerd[1559]: time="2025-05-27T18:17:20.127886807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 18:17:20.145306 containerd[1559]: time="2025-05-27T18:17:20.145240251Z" level=info msg="CreateContainer within sandbox \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 18:17:20.157025 containerd[1559]: time="2025-05-27T18:17:20.153258248Z" level=info msg="Container 7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:20.169395 containerd[1559]: time="2025-05-27T18:17:20.169356266Z" level=info msg="CreateContainer within sandbox \"ad3310328d0fbe0dca728e69835341b988defa769637ad8a1793019e285141a6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\"" May 27 18:17:20.170358 containerd[1559]: time="2025-05-27T18:17:20.170063086Z" level=info msg="StartContainer for \"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\"" May 27 18:17:20.171484 containerd[1559]: time="2025-05-27T18:17:20.171462566Z" level=info msg="connecting to shim 7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63" address="unix:///run/containerd/s/c7a2408691f90086130fae7378da96626f933907c34017b7168bbc8347c577d6" protocol=ttrpc version=3 May 27 18:17:20.216090 systemd[1]: Started 
cri-containerd-7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63.scope - libcontainer container 7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63. May 27 18:17:20.263770 containerd[1559]: time="2025-05-27T18:17:20.263736308Z" level=info msg="StartContainer for \"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" returns successfully" May 27 18:17:20.335368 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 18:17:20.335454 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 18:17:20.510088 kubelet[2718]: I0527 18:17:20.509428 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n2rpl" podStartSLOduration=2.297565372 podStartE2EDuration="11.50941276s" podCreationTimestamp="2025-05-27 18:17:09 +0000 UTC" firstStartedPulling="2025-05-27 18:17:10.917249774 +0000 UTC m=+18.659793845" lastFinishedPulling="2025-05-27 18:17:20.129097162 +0000 UTC m=+27.871641233" observedRunningTime="2025-05-27 18:17:20.507808104 +0000 UTC m=+28.250352175" watchObservedRunningTime="2025-05-27 18:17:20.50941276 +0000 UTC m=+28.251956831" May 27 18:17:20.545052 kubelet[2718]: I0527 18:17:20.544609 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-backend-key-pair\") pod \"7eea353f-a1bb-479c-852d-42d62f71369b\" (UID: \"7eea353f-a1bb-479c-852d-42d62f71369b\") " May 27 18:17:20.545052 kubelet[2718]: I0527 18:17:20.544661 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-ca-bundle\") pod \"7eea353f-a1bb-479c-852d-42d62f71369b\" (UID: \"7eea353f-a1bb-479c-852d-42d62f71369b\") " May 27 18:17:20.545052 kubelet[2718]: I0527 18:17:20.544683 2718 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq84s\" (UniqueName: \"kubernetes.io/projected/7eea353f-a1bb-479c-852d-42d62f71369b-kube-api-access-wq84s\") pod \"7eea353f-a1bb-479c-852d-42d62f71369b\" (UID: \"7eea353f-a1bb-479c-852d-42d62f71369b\") " May 27 18:17:20.546529 kubelet[2718]: I0527 18:17:20.546509 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7eea353f-a1bb-479c-852d-42d62f71369b" (UID: "7eea353f-a1bb-479c-852d-42d62f71369b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 18:17:20.548889 kubelet[2718]: I0527 18:17:20.548828 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7eea353f-a1bb-479c-852d-42d62f71369b" (UID: "7eea353f-a1bb-479c-852d-42d62f71369b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 18:17:20.549745 kubelet[2718]: I0527 18:17:20.549712 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eea353f-a1bb-479c-852d-42d62f71369b-kube-api-access-wq84s" (OuterVolumeSpecName: "kube-api-access-wq84s") pod "7eea353f-a1bb-479c-852d-42d62f71369b" (UID: "7eea353f-a1bb-479c-852d-42d62f71369b"). InnerVolumeSpecName "kube-api-access-wq84s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 18:17:20.646010 kubelet[2718]: I0527 18:17:20.645906 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wq84s\" (UniqueName: \"kubernetes.io/projected/7eea353f-a1bb-479c-852d-42d62f71369b-kube-api-access-wq84s\") on node \"172-237-129-174\" DevicePath \"\"" May 27 18:17:20.646010 kubelet[2718]: I0527 18:17:20.645937 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-backend-key-pair\") on node \"172-237-129-174\" DevicePath \"\"" May 27 18:17:20.646010 kubelet[2718]: I0527 18:17:20.645947 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7eea353f-a1bb-479c-852d-42d62f71369b-whisker-ca-bundle\") on node \"172-237-129-174\" DevicePath \"\"" May 27 18:17:20.813507 systemd[1]: Removed slice kubepods-besteffort-pod7eea353f_a1bb_479c_852d_42d62f71369b.slice - libcontainer container kubepods-besteffort-pod7eea353f_a1bb_479c_852d_42d62f71369b.slice. May 27 18:17:20.867798 systemd[1]: Created slice kubepods-besteffort-pod3bec5c37_47a4_48c5_93f9_be68e4cfdd20.slice - libcontainer container kubepods-besteffort-pod3bec5c37_47a4_48c5_93f9_be68e4cfdd20.slice. 
May 27 18:17:20.947910 kubelet[2718]: I0527 18:17:20.947848 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6z7\" (UniqueName: \"kubernetes.io/projected/3bec5c37-47a4-48c5-93f9-be68e4cfdd20-kube-api-access-pl6z7\") pod \"whisker-5c45797667-dfdqs\" (UID: \"3bec5c37-47a4-48c5-93f9-be68e4cfdd20\") " pod="calico-system/whisker-5c45797667-dfdqs" May 27 18:17:20.947910 kubelet[2718]: I0527 18:17:20.947892 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bec5c37-47a4-48c5-93f9-be68e4cfdd20-whisker-backend-key-pair\") pod \"whisker-5c45797667-dfdqs\" (UID: \"3bec5c37-47a4-48c5-93f9-be68e4cfdd20\") " pod="calico-system/whisker-5c45797667-dfdqs" May 27 18:17:20.947910 kubelet[2718]: I0527 18:17:20.947917 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bec5c37-47a4-48c5-93f9-be68e4cfdd20-whisker-ca-bundle\") pod \"whisker-5c45797667-dfdqs\" (UID: \"3bec5c37-47a4-48c5-93f9-be68e4cfdd20\") " pod="calico-system/whisker-5c45797667-dfdqs" May 27 18:17:21.065540 containerd[1559]: time="2025-05-27T18:17:21.065365106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"749bea3c3c8166baed6ff4f417191aa4999c3032a53448f86da44db4651967b6\" pid:3800 exit_status:1 exited_at:{seconds:1748369841 nanos:65067378}" May 27 18:17:21.100660 systemd[1]: var-lib-kubelet-pods-7eea353f\x2da1bb\x2d479c\x2d852d\x2d42d62f71369b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwq84s.mount: Deactivated successfully. 
May 27 18:17:21.100795 systemd[1]: var-lib-kubelet-pods-7eea353f\x2da1bb\x2d479c\x2d852d\x2d42d62f71369b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 18:17:21.147576 containerd[1559]: time="2025-05-27T18:17:21.147527778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"0b734664ad1e9dc9d0cdb95354b855acadd9e7abe929697a7916c9addee6c59e\" pid:3827 exit_status:1 exited_at:{seconds:1748369841 nanos:147216470}" May 27 18:17:21.171428 containerd[1559]: time="2025-05-27T18:17:21.171396294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c45797667-dfdqs,Uid:3bec5c37-47a4-48c5-93f9-be68e4cfdd20,Namespace:calico-system,Attempt:0,}" May 27 18:17:21.305371 systemd-networkd[1451]: caliad4915f3dab: Link UP May 27 18:17:21.306831 systemd-networkd[1451]: caliad4915f3dab: Gained carrier May 27 18:17:21.329064 containerd[1559]: 2025-05-27 18:17:21.199 [INFO][3838] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:17:21.329064 containerd[1559]: 2025-05-27 18:17:21.234 [INFO][3838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0 whisker-5c45797667- calico-system 3bec5c37-47a4-48c5-93f9-be68e4cfdd20 903 0 2025-05-27 18:17:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c45797667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-237-129-174 whisker-5c45797667-dfdqs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliad4915f3dab [] [] }} ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-" 
May 27 18:17:21.329064 containerd[1559]: 2025-05-27 18:17:21.235 [INFO][3838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.329064 containerd[1559]: 2025-05-27 18:17:21.259 [INFO][3850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" HandleID="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Workload="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.259 [INFO][3850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" HandleID="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Workload="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9940), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-129-174", "pod":"whisker-5c45797667-dfdqs", "timestamp":"2025-05-27 18:17:21.259082484 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.259 [INFO][3850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.259 [INFO][3850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.259 [INFO][3850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.264 [INFO][3850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" host="172-237-129-174" May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.268 [INFO][3850] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.271 [INFO][3850] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.275 [INFO][3850] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:21.329668 containerd[1559]: 2025-05-27 18:17:21.277 [INFO][3850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.277 [INFO][3850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" host="172-237-129-174" May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.278 [INFO][3850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6 May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.282 [INFO][3850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" host="172-237-129-174" May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.286 [INFO][3850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.193/26] block=192.168.109.192/26 
handle="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" host="172-237-129-174" May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.286 [INFO][3850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.193/26] handle="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" host="172-237-129-174" May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.286 [INFO][3850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:17:21.329810 containerd[1559]: 2025-05-27 18:17:21.286 [INFO][3850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.193/26] IPv6=[] ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" HandleID="k8s-pod-network.d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Workload="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.329909 containerd[1559]: 2025-05-27 18:17:21.292 [INFO][3838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0", GenerateName:"whisker-5c45797667-", Namespace:"calico-system", SelfLink:"", UID:"3bec5c37-47a4-48c5-93f9-be68e4cfdd20", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c45797667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"whisker-5c45797667-dfdqs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad4915f3dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:21.329909 containerd[1559]: 2025-05-27 18:17:21.292 [INFO][3838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.193/32] ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.329965 containerd[1559]: 2025-05-27 18:17:21.292 [INFO][3838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad4915f3dab ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.329965 containerd[1559]: 2025-05-27 18:17:21.303 [INFO][3838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.331758 containerd[1559]: 2025-05-27 18:17:21.303 [INFO][3838] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" 
Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0", GenerateName:"whisker-5c45797667-", Namespace:"calico-system", SelfLink:"", UID:"3bec5c37-47a4-48c5-93f9-be68e4cfdd20", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c45797667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6", Pod:"whisker-5c45797667-dfdqs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad4915f3dab", MAC:"ba:e0:7b:85:ff:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:21.331805 containerd[1559]: 2025-05-27 18:17:21.324 [INFO][3838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" Namespace="calico-system" Pod="whisker-5c45797667-dfdqs" WorkloadEndpoint="172--237--129--174-k8s-whisker--5c45797667--dfdqs-eth0" May 27 18:17:21.366035 
containerd[1559]: time="2025-05-27T18:17:21.365952475Z" level=info msg="connecting to shim d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6" address="unix:///run/containerd/s/edb786031802d859414bf3b5a30c698d79cc11a136ddce6d7553e75f1d1b9952" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:21.395096 systemd[1]: Started cri-containerd-d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6.scope - libcontainer container d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6. May 27 18:17:21.443939 containerd[1559]: time="2025-05-27T18:17:21.443918364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c45797667-dfdqs,Uid:3bec5c37-47a4-48c5-93f9-be68e4cfdd20,Namespace:calico-system,Attempt:0,} returns sandbox id \"d038c45386e6354e3f60113150a75a0b8e0335b011360b4c69f4b5505f001bb6\"" May 27 18:17:21.445814 containerd[1559]: time="2025-05-27T18:17:21.445765813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:17:21.566462 containerd[1559]: time="2025-05-27T18:17:21.566412552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"44c655bb0a3229acaf219881b3d4b596bcab88dc1f08a7d2d756c74756bcf03e\" pid:3923 exit_status:1 exited_at:{seconds:1748369841 nanos:566018151}" May 27 18:17:21.631162 containerd[1559]: time="2025-05-27T18:17:21.631071347Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:17:21.632173 containerd[1559]: time="2025-05-27T18:17:21.632133465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:17:21.632612 containerd[1559]: time="2025-05-27T18:17:21.632160436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:17:21.632650 kubelet[2718]: E0527 18:17:21.632331 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:17:21.632650 kubelet[2718]: E0527 18:17:21.632374 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:17:21.634278 kubelet[2718]: E0527 18:17:21.632516 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2106e42b1444a1ea1099c7150d402b7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:17:21.636658 containerd[1559]: 
time="2025-05-27T18:17:21.636622695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:17:21.814603 containerd[1559]: time="2025-05-27T18:17:21.814479639Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:17:21.815856 containerd[1559]: time="2025-05-27T18:17:21.815782843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:17:21.816216 containerd[1559]: time="2025-05-27T18:17:21.815847215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:17:21.816616 kubelet[2718]: E0527 18:17:21.816557 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:17:21.816734 kubelet[2718]: E0527 18:17:21.816626 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:17:21.816776 kubelet[2718]: E0527 18:17:21.816722 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:17:21.818129 kubelet[2718]: E0527 18:17:21.818042 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:17:22.347134 kubelet[2718]: I0527 18:17:22.347089 2718 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eea353f-a1bb-479c-852d-42d62f71369b" path="/var/lib/kubelet/pods/7eea353f-a1bb-479c-852d-42d62f71369b/volumes" May 27 18:17:22.503993 kubelet[2718]: E0527 18:17:22.503039 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:17:22.597363 containerd[1559]: time="2025-05-27T18:17:22.597185671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"f9c92248800bd0fcda5bfe70c0e75d0a919585fd2ef4e9ede10271e4985f9140\" pid:4041 exit_status:1 exited_at:{seconds:1748369842 nanos:596822422}" May 27 18:17:23.274303 systemd-networkd[1451]: caliad4915f3dab: Gained IPv6LL May 27 18:17:28.352396 containerd[1559]: time="2025-05-27T18:17:28.352339934Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fsl8,Uid:98c7e566-5a28-4427-8f66-2be6467e7c39,Namespace:calico-system,Attempt:0,}" May 27 18:17:28.352940 containerd[1559]: time="2025-05-27T18:17:28.352919395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-fg2h4,Uid:86e25ba7-7f97-4a6e-b732-651388f4e9ac,Namespace:calico-apiserver,Attempt:0,}" May 27 18:17:28.504734 systemd-networkd[1451]: cali215e0f3b282: Link UP May 27 18:17:28.504955 systemd-networkd[1451]: cali215e0f3b282: Gained carrier May 27 18:17:28.522731 containerd[1559]: 2025-05-27 18:17:28.405 [INFO][4181] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:17:28.522731 containerd[1559]: 2025-05-27 18:17:28.427 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0 calico-apiserver-6b98f7c89d- calico-apiserver 86e25ba7-7f97-4a6e-b732-651388f4e9ac 837 0 2025-05-27 18:17:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b98f7c89d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-129-174 calico-apiserver-6b98f7c89d-fg2h4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali215e0f3b282 [] [] }} ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-" May 27 18:17:28.522731 containerd[1559]: 2025-05-27 18:17:28.429 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" 
Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.522731 containerd[1559]: 2025-05-27 18:17:28.459 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" HandleID="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Workload="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.459 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" HandleID="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Workload="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d91b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-129-174", "pod":"calico-apiserver-6b98f7c89d-fg2h4", "timestamp":"2025-05-27 18:17:28.459717157 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.460 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.460 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.460 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.471 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" host="172-237-129-174" May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.476 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.480 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.482 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:28.523269 containerd[1559]: 2025-05-27 18:17:28.484 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.484 [INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" host="172-237-129-174" May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.485 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.489 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" host="172-237-129-174" May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.494 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.194/26] block=192.168.109.192/26 
handle="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" host="172-237-129-174" May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.494 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.194/26] handle="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" host="172-237-129-174" May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.494 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:17:28.523462 containerd[1559]: 2025-05-27 18:17:28.494 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.194/26] IPv6=[] ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" HandleID="k8s-pod-network.cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Workload="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.523591 containerd[1559]: 2025-05-27 18:17:28.498 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0", GenerateName:"calico-apiserver-6b98f7c89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"86e25ba7-7f97-4a6e-b732-651388f4e9ac", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98f7c89d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"calico-apiserver-6b98f7c89d-fg2h4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali215e0f3b282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:28.523642 containerd[1559]: 2025-05-27 18:17:28.498 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.194/32] ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.523642 containerd[1559]: 2025-05-27 18:17:28.498 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali215e0f3b282 ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.523642 containerd[1559]: 2025-05-27 18:17:28.503 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.525074 containerd[1559]: 2025-05-27 18:17:28.504 
[INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0", GenerateName:"calico-apiserver-6b98f7c89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"86e25ba7-7f97-4a6e-b732-651388f4e9ac", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98f7c89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b", Pod:"calico-apiserver-6b98f7c89d-fg2h4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali215e0f3b282", MAC:"ee:02:f1:48:6a:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:28.525163 containerd[1559]: 2025-05-27 18:17:28.518 [INFO][4181] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-fg2h4" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--fg2h4-eth0" May 27 18:17:28.558059 containerd[1559]: time="2025-05-27T18:17:28.556832263Z" level=info msg="connecting to shim cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b" address="unix:///run/containerd/s/107f225cd027c2c1311bcad8ed6cfd19fd05529e6057be3d8206b0addeeeb02f" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:28.561524 systemd-networkd[1451]: cali3d4d7809bf8: Link UP May 27 18:17:28.562630 systemd-networkd[1451]: cali3d4d7809bf8: Gained carrier May 27 18:17:28.586860 containerd[1559]: 2025-05-27 18:17:28.417 [INFO][4175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 18:17:28.586860 containerd[1559]: 2025-05-27 18:17:28.430 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-csi--node--driver--8fsl8-eth0 csi-node-driver- calico-system 98c7e566-5a28-4427-8f66-2be6467e7c39 737 0 2025-05-27 18:17:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-237-129-174 csi-node-driver-8fsl8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d4d7809bf8 [] [] }} ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-" May 27 18:17:28.586860 containerd[1559]: 2025-05-27 18:17:28.430 [INFO][4175] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.586860 containerd[1559]: 2025-05-27 18:17:28.476 [INFO][4205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" HandleID="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Workload="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.477 [INFO][4205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" HandleID="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Workload="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-129-174", "pod":"csi-node-driver-8fsl8", "timestamp":"2025-05-27 18:17:28.475954696 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.477 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.494 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.494 [INFO][4205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.501 [INFO][4205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" host="172-237-129-174" May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.512 [INFO][4205] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.525 [INFO][4205] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.527 [INFO][4205] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.529 [INFO][4205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:28.587318 containerd[1559]: 2025-05-27 18:17:28.530 [INFO][4205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" host="172-237-129-174" May 27 18:17:28.587572 containerd[1559]: 2025-05-27 18:17:28.531 [INFO][4205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201 May 27 18:17:28.587572 containerd[1559]: 2025-05-27 18:17:28.535 [INFO][4205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" host="172-237-129-174" May 27 18:17:28.587572 containerd[1559]: 2025-05-27 18:17:28.541 [INFO][4205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.195/26] block=192.168.109.192/26 
handle="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" host="172-237-129-174" May 27 18:17:28.587572 containerd[1559]: 2025-05-27 18:17:28.541 [INFO][4205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.195/26] handle="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" host="172-237-129-174" May 27 18:17:28.587572 containerd[1559]: 2025-05-27 18:17:28.541 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:17:28.587572 containerd[1559]: 2025-05-27 18:17:28.541 [INFO][4205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.195/26] IPv6=[] ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" HandleID="k8s-pod-network.267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Workload="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.587685 containerd[1559]: 2025-05-27 18:17:28.548 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-csi--node--driver--8fsl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98c7e566-5a28-4427-8f66-2be6467e7c39", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"csi-node-driver-8fsl8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d4d7809bf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:28.587736 containerd[1559]: 2025-05-27 18:17:28.549 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.195/32] ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.587736 containerd[1559]: 2025-05-27 18:17:28.549 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d4d7809bf8 ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.587736 containerd[1559]: 2025-05-27 18:17:28.563 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.587800 containerd[1559]: 2025-05-27 18:17:28.565 [INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-csi--node--driver--8fsl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"98c7e566-5a28-4427-8f66-2be6467e7c39", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201", Pod:"csi-node-driver-8fsl8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d4d7809bf8", MAC:"4e:3d:5b:d7:f4:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:28.587850 containerd[1559]: 2025-05-27 18:17:28.582 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" 
Namespace="calico-system" Pod="csi-node-driver-8fsl8" WorkloadEndpoint="172--237--129--174-k8s-csi--node--driver--8fsl8-eth0" May 27 18:17:28.595141 systemd[1]: Started cri-containerd-cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b.scope - libcontainer container cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b. May 27 18:17:28.617875 containerd[1559]: time="2025-05-27T18:17:28.617834531Z" level=info msg="connecting to shim 267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201" address="unix:///run/containerd/s/88a798734ac98ac2c2b2167b6abe08ea930e413adbea1a8121aa3626b51b77c2" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:28.642087 systemd[1]: Started cri-containerd-267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201.scope - libcontainer container 267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201. May 27 18:17:28.667458 containerd[1559]: time="2025-05-27T18:17:28.667418452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-fg2h4,Uid:86e25ba7-7f97-4a6e-b732-651388f4e9ac,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b\"" May 27 18:17:28.670657 containerd[1559]: time="2025-05-27T18:17:28.670638586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 18:17:28.685503 containerd[1559]: time="2025-05-27T18:17:28.685455311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8fsl8,Uid:98c7e566-5a28-4427-8f66-2be6467e7c39,Namespace:calico-system,Attempt:0,} returns sandbox id \"267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201\"" May 27 18:17:28.929508 kubelet[2718]: I0527 18:17:28.929127 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:17:28.929508 kubelet[2718]: E0527 18:17:28.929498 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:29.349183 kubelet[2718]: E0527 18:17:29.348194 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:29.353233 containerd[1559]: time="2025-05-27T18:17:29.353194813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd666d6b8-lfdk4,Uid:a7a54593-937f-4d42-a4fe-5eb5065d47aa,Namespace:calico-system,Attempt:0,}" May 27 18:17:29.364598 containerd[1559]: time="2025-05-27T18:17:29.364573077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wz2n4,Uid:0cd2286d-e934-4c13-8046-58f70aab7816,Namespace:kube-system,Attempt:0,}" May 27 18:17:29.364968 containerd[1559]: time="2025-05-27T18:17:29.364952123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-tghp7,Uid:84df68e6-c45b-4638-b0fd-ffc34714916b,Namespace:calico-apiserver,Attempt:0,}" May 27 18:17:29.540686 kubelet[2718]: E0527 18:17:29.540574 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:29.652125 systemd-networkd[1451]: calia9ae743011a: Link UP May 27 18:17:29.652607 systemd-networkd[1451]: calia9ae743011a: Gained carrier May 27 18:17:29.682748 systemd-networkd[1451]: vxlan.calico: Link UP May 27 18:17:29.682762 systemd-networkd[1451]: vxlan.calico: Gained carrier May 27 18:17:29.699725 containerd[1559]: 2025-05-27 18:17:29.489 [INFO][4356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0 calico-kube-controllers-6cd666d6b8- calico-system a7a54593-937f-4d42-a4fe-5eb5065d47aa 832 0 
2025-05-27 18:17:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cd666d6b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-237-129-174 calico-kube-controllers-6cd666d6b8-lfdk4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9ae743011a [] [] }} ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-" May 27 18:17:29.699725 containerd[1559]: 2025-05-27 18:17:29.489 [INFO][4356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 18:17:29.699725 containerd[1559]: 2025-05-27 18:17:29.591 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" HandleID="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Workload="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.592 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" HandleID="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Workload="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e110), 
Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-129-174", "pod":"calico-kube-controllers-6cd666d6b8-lfdk4", "timestamp":"2025-05-27 18:17:29.591646606 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.592 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.592 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.592 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.601 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" host="172-237-129-174" May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.607 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.613 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.615 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:29.699938 containerd[1559]: 2025-05-27 18:17:29.620 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.620 [INFO][4407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 
handle="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" host="172-237-129-174" May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.622 [INFO][4407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.629 [INFO][4407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" host="172-237-129-174" May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.637 [INFO][4407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.196/26] block=192.168.109.192/26 handle="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" host="172-237-129-174" May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.637 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.196/26] handle="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" host="172-237-129-174" May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.639 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:17:29.700215 containerd[1559]: 2025-05-27 18:17:29.639 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.196/26] IPv6=[] ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" HandleID="k8s-pod-network.c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Workload="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 18:17:29.700360 containerd[1559]: 2025-05-27 18:17:29.645 [INFO][4356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0", GenerateName:"calico-kube-controllers-6cd666d6b8-", Namespace:"calico-system", SelfLink:"", UID:"a7a54593-937f-4d42-a4fe-5eb5065d47aa", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd666d6b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"calico-kube-controllers-6cd666d6b8-lfdk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9ae743011a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:29.700415 containerd[1559]: 2025-05-27 18:17:29.645 [INFO][4356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.196/32] ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 18:17:29.700415 containerd[1559]: 2025-05-27 18:17:29.645 [INFO][4356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9ae743011a ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 18:17:29.700415 containerd[1559]: 2025-05-27 18:17:29.650 [INFO][4356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 18:17:29.700476 containerd[1559]: 2025-05-27 18:17:29.651 [INFO][4356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0", GenerateName:"calico-kube-controllers-6cd666d6b8-", Namespace:"calico-system", SelfLink:"", UID:"a7a54593-937f-4d42-a4fe-5eb5065d47aa", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cd666d6b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f", Pod:"calico-kube-controllers-6cd666d6b8-lfdk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9ae743011a", MAC:"0a:f9:6e:73:7a:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:29.700525 containerd[1559]: 2025-05-27 18:17:29.672 [INFO][4356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" Namespace="calico-system" Pod="calico-kube-controllers-6cd666d6b8-lfdk4" WorkloadEndpoint="172--237--129--174-k8s-calico--kube--controllers--6cd666d6b8--lfdk4-eth0" May 27 
18:17:29.754921 systemd-networkd[1451]: caliabb6853aea6: Link UP May 27 18:17:29.755329 systemd-networkd[1451]: caliabb6853aea6: Gained carrier May 27 18:17:29.789659 containerd[1559]: time="2025-05-27T18:17:29.787794928Z" level=info msg="connecting to shim c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f" address="unix:///run/containerd/s/770bf316f0b9a9eaa83ec8e25dcb5625ab607fa912b0af9bf7dfecf16b066c36" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:29.791188 containerd[1559]: 2025-05-27 18:17:29.505 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0 calico-apiserver-6b98f7c89d- calico-apiserver 84df68e6-c45b-4638-b0fd-ffc34714916b 841 0 2025-05-27 18:17:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b98f7c89d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-237-129-174 calico-apiserver-6b98f7c89d-tghp7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliabb6853aea6 [] [] }} ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-" May 27 18:17:29.791188 containerd[1559]: 2025-05-27 18:17:29.505 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.791188 containerd[1559]: 2025-05-27 18:17:29.605 [INFO][4409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" HandleID="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Workload="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.606 [INFO][4409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" HandleID="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Workload="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038fbf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-237-129-174", "pod":"calico-apiserver-6b98f7c89d-tghp7", "timestamp":"2025-05-27 18:17:29.605833626 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.606 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.637 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.637 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.702 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" host="172-237-129-174" May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.710 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.716 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.718 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:29.791319 containerd[1559]: 2025-05-27 18:17:29.723 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.723 [INFO][4409] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" host="172-237-129-174" May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.725 [INFO][4409] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28 May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.728 [INFO][4409] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" host="172-237-129-174" May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.733 [INFO][4409] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.197/26] block=192.168.109.192/26 
handle="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" host="172-237-129-174" May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.733 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.197/26] handle="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" host="172-237-129-174" May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.733 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:17:29.791719 containerd[1559]: 2025-05-27 18:17:29.734 [INFO][4409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.197/26] IPv6=[] ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" HandleID="k8s-pod-network.0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Workload="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.791846 containerd[1559]: 2025-05-27 18:17:29.742 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0", GenerateName:"calico-apiserver-6b98f7c89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"84df68e6-c45b-4638-b0fd-ffc34714916b", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98f7c89d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"calico-apiserver-6b98f7c89d-tghp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabb6853aea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:29.791896 containerd[1559]: 2025-05-27 18:17:29.742 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.197/32] ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.791896 containerd[1559]: 2025-05-27 18:17:29.742 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabb6853aea6 ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.791896 containerd[1559]: 2025-05-27 18:17:29.762 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.791958 containerd[1559]: 2025-05-27 18:17:29.762 
[INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0", GenerateName:"calico-apiserver-6b98f7c89d-", Namespace:"calico-apiserver", SelfLink:"", UID:"84df68e6-c45b-4638-b0fd-ffc34714916b", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b98f7c89d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28", Pod:"calico-apiserver-6b98f7c89d-tghp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabb6853aea6", MAC:"be:72:cc:57:14:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:29.793090 containerd[1559]: 2025-05-27 18:17:29.785 [INFO][4383] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" Namespace="calico-apiserver" Pod="calico-apiserver-6b98f7c89d-tghp7" WorkloadEndpoint="172--237--129--174-k8s-calico--apiserver--6b98f7c89d--tghp7-eth0" May 27 18:17:29.857096 containerd[1559]: time="2025-05-27T18:17:29.857048404Z" level=info msg="connecting to shim 0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28" address="unix:///run/containerd/s/bc10f568ea7318bfe3ccb998bda2b8dbe622ee155340d5ab6954aff4cd19ed5e" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:29.889161 systemd[1]: Started cri-containerd-0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28.scope - libcontainer container 0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28. May 27 18:17:29.925337 systemd[1]: Started cri-containerd-c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f.scope - libcontainer container c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f. 
May 27 18:17:29.933786 systemd-networkd[1451]: cali3d4d7809bf8: Gained IPv6LL May 27 18:17:29.953183 systemd-networkd[1451]: calie010a1e36a6: Link UP May 27 18:17:29.956429 systemd-networkd[1451]: calie010a1e36a6: Gained carrier May 27 18:17:30.011103 containerd[1559]: 2025-05-27 18:17:29.548 [INFO][4367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0 coredns-668d6bf9bc- kube-system 0cd2286d-e934-4c13-8046-58f70aab7816 838 0 2025-05-27 18:16:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-129-174 coredns-668d6bf9bc-wz2n4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie010a1e36a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-" May 27 18:17:30.011103 containerd[1559]: 2025-05-27 18:17:29.549 [INFO][4367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.011103 containerd[1559]: 2025-05-27 18:17:29.630 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" HandleID="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Workload="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.631 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" HandleID="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Workload="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002350d0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-129-174", "pod":"coredns-668d6bf9bc-wz2n4", "timestamp":"2025-05-27 18:17:29.630384651 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.631 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.734 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.734 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.808 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" host="172-237-129-174" May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.824 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.851 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.857 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:30.011290 containerd[1559]: 2025-05-27 18:17:29.861 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and 
block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.861 [INFO][4424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" host="172-237-129-174" May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.868 [INFO][4424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.898 [INFO][4424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" host="172-237-129-174" May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.912 [INFO][4424] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.198/26] block=192.168.109.192/26 handle="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" host="172-237-129-174" May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.913 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.198/26] handle="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" host="172-237-129-174" May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.913 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 18:17:30.011472 containerd[1559]: 2025-05-27 18:17:29.913 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.198/26] IPv6=[] ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" HandleID="k8s-pod-network.f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Workload="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.011606 containerd[1559]: 2025-05-27 18:17:29.935 [INFO][4367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0cd2286d-e934-4c13-8046-58f70aab7816", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"coredns-668d6bf9bc-wz2n4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie010a1e36a6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:30.011606 containerd[1559]: 2025-05-27 18:17:29.935 [INFO][4367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.198/32] ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.011606 containerd[1559]: 2025-05-27 18:17:29.935 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie010a1e36a6 ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.011606 containerd[1559]: 2025-05-27 18:17:29.962 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.011606 containerd[1559]: 2025-05-27 18:17:29.968 [INFO][4367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0cd2286d-e934-4c13-8046-58f70aab7816", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d", Pod:"coredns-668d6bf9bc-wz2n4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie010a1e36a6", MAC:"76:73:bc:93:f6:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:30.011606 containerd[1559]: 2025-05-27 18:17:29.987 [INFO][4367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wz2n4" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--wz2n4-eth0" May 27 18:17:30.039814 containerd[1559]: time="2025-05-27T18:17:30.039765871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cd666d6b8-lfdk4,Uid:a7a54593-937f-4d42-a4fe-5eb5065d47aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f\"" May 27 18:17:30.082004 containerd[1559]: time="2025-05-27T18:17:30.081694666Z" level=info msg="connecting to shim f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d" address="unix:///run/containerd/s/e93ca0a8d7fe4420b9c5de8c464dc8ed38356b7dd37e2889fb0a83c61e2d5a7a" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:30.120360 containerd[1559]: time="2025-05-27T18:17:30.119924435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b98f7c89d-tghp7,Uid:84df68e6-c45b-4638-b0fd-ffc34714916b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28\"" May 27 18:17:30.140241 systemd[1]: Started cri-containerd-f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d.scope - libcontainer container f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d. 
May 27 18:17:30.223245 containerd[1559]: time="2025-05-27T18:17:30.223150628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wz2n4,Uid:0cd2286d-e934-4c13-8046-58f70aab7816,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d\"" May 27 18:17:30.224653 kubelet[2718]: E0527 18:17:30.224279 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:30.228445 containerd[1559]: time="2025-05-27T18:17:30.228377638Z" level=info msg="CreateContainer within sandbox \"f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 18:17:30.243345 containerd[1559]: time="2025-05-27T18:17:30.243325034Z" level=info msg="Container d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:30.255036 containerd[1559]: time="2025-05-27T18:17:30.254999621Z" level=info msg="CreateContainer within sandbox \"f70c55a504144597657d3a9408f5b04fda750b2a965b990c27a8510ccf10d93d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f\"" May 27 18:17:30.258659 containerd[1559]: time="2025-05-27T18:17:30.258573545Z" level=info msg="StartContainer for \"d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f\"" May 27 18:17:30.262438 containerd[1559]: time="2025-05-27T18:17:30.262400933Z" level=info msg="connecting to shim d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f" address="unix:///run/containerd/s/e93ca0a8d7fe4420b9c5de8c464dc8ed38356b7dd37e2889fb0a83c61e2d5a7a" protocol=ttrpc version=3 May 27 18:17:30.316246 systemd[1]: Started cri-containerd-d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f.scope 
- libcontainer container d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f. May 27 18:17:30.378152 systemd-networkd[1451]: cali215e0f3b282: Gained IPv6LL May 27 18:17:30.410539 containerd[1559]: time="2025-05-27T18:17:30.408703688Z" level=info msg="StartContainer for \"d5124f13486c7814fae2edbecb8b09a150b5a380380adbef18ff9ff77377242f\" returns successfully" May 27 18:17:30.548692 kubelet[2718]: E0527 18:17:30.548623 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:30.570882 kubelet[2718]: I0527 18:17:30.570592 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wz2n4" podStartSLOduration=31.57057461 podStartE2EDuration="31.57057461s" podCreationTimestamp="2025-05-27 18:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:17:30.569873508 +0000 UTC m=+38.312417579" watchObservedRunningTime="2025-05-27 18:17:30.57057461 +0000 UTC m=+38.313118681" May 27 18:17:31.019668 systemd-networkd[1451]: calia9ae743011a: Gained IPv6LL May 27 18:17:31.071945 containerd[1559]: time="2025-05-27T18:17:31.071909176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:31.072535 containerd[1559]: time="2025-05-27T18:17:31.072511024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 18:17:31.073062 containerd[1559]: time="2025-05-27T18:17:31.073021682Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:31.074787 containerd[1559]: 
time="2025-05-27T18:17:31.074560173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:31.075227 containerd[1559]: time="2025-05-27T18:17:31.075203073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 2.403896375s" May 27 18:17:31.075269 containerd[1559]: time="2025-05-27T18:17:31.075231313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 18:17:31.077938 containerd[1559]: time="2025-05-27T18:17:31.077907441Z" level=info msg="CreateContainer within sandbox \"cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 18:17:31.079137 containerd[1559]: time="2025-05-27T18:17:31.079110949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 18:17:31.087959 containerd[1559]: time="2025-05-27T18:17:31.087251344Z" level=info msg="Container 679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:31.093474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643288675.mount: Deactivated successfully. 
May 27 18:17:31.102054 containerd[1559]: time="2025-05-27T18:17:31.102031874Z" level=info msg="CreateContainer within sandbox \"cb7274f9bb2a81db536417804159ae4b510e996ec77869d994b141dc521d866b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a\"" May 27 18:17:31.102557 containerd[1559]: time="2025-05-27T18:17:31.102532491Z" level=info msg="StartContainer for \"679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a\"" May 27 18:17:31.103684 containerd[1559]: time="2025-05-27T18:17:31.103659428Z" level=info msg="connecting to shim 679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a" address="unix:///run/containerd/s/107f225cd027c2c1311bcad8ed6cfd19fd05529e6057be3d8206b0addeeeb02f" protocol=ttrpc version=3 May 27 18:17:31.128099 systemd[1]: Started cri-containerd-679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a.scope - libcontainer container 679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a. 
May 27 18:17:31.178551 containerd[1559]: time="2025-05-27T18:17:31.178481822Z" level=info msg="StartContainer for \"679752cb5ff6167a15fa49174394229ce819a55ab0dcc2f8d8b36385b9c35e0a\" returns successfully" May 27 18:17:31.210474 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL May 27 18:17:31.274578 systemd-networkd[1451]: calie010a1e36a6: Gained IPv6LL May 27 18:17:31.346064 kubelet[2718]: E0527 18:17:31.345690 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:31.347572 containerd[1559]: time="2025-05-27T18:17:31.347468016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjksh,Uid:dd5bd327-01f8-498a-b961-849e97947b28,Namespace:kube-system,Attempt:0,}" May 27 18:17:31.348353 containerd[1559]: time="2025-05-27T18:17:31.348268387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fv6vw,Uid:3ce55071-e071-4152-8491-d9e57fa88f84,Namespace:calico-system,Attempt:0,}" May 27 18:17:31.403185 systemd-networkd[1451]: caliabb6853aea6: Gained IPv6LL May 27 18:17:31.469072 systemd-networkd[1451]: calife3aca06008: Link UP May 27 18:17:31.470157 systemd-networkd[1451]: calife3aca06008: Gained carrier May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.396 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0 coredns-668d6bf9bc- kube-system dd5bd327-01f8-498a-b961-849e97947b28 839 0 2025-05-27 18:16:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-237-129-174 coredns-668d6bf9bc-fjksh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife3aca06008 [{dns UDP 53 0 } {dns-tcp TCP 
53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.396 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.427 [INFO][4793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" HandleID="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Workload="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.428 [INFO][4793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" HandleID="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Workload="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235050), Attrs:map[string]string{"namespace":"kube-system", "node":"172-237-129-174", "pod":"coredns-668d6bf9bc-fjksh", "timestamp":"2025-05-27 18:17:31.42787822 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.428 [INFO][4793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.428 [INFO][4793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.428 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.435 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.439 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.443 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.444 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.446 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.446 [INFO][4793] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.447 [INFO][4793] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.451 [INFO][4793] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" host="172-237-129-174" May 27 18:17:31.486695 
containerd[1559]: 2025-05-27 18:17:31.456 [INFO][4793] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.199/26] block=192.168.109.192/26 handle="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.456 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.199/26] handle="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" host="172-237-129-174" May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.457 [INFO][4793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:17:31.486695 containerd[1559]: 2025-05-27 18:17:31.457 [INFO][4793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.199/26] IPv6=[] ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" HandleID="k8s-pod-network.e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Workload="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.487342 containerd[1559]: 2025-05-27 18:17:31.460 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd5bd327-01f8-498a-b961-849e97947b28", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"coredns-668d6bf9bc-fjksh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife3aca06008", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:31.487342 containerd[1559]: 2025-05-27 18:17:31.460 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.199/32] ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.487342 containerd[1559]: 2025-05-27 18:17:31.460 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife3aca06008 ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.487342 containerd[1559]: 2025-05-27 18:17:31.471 
[INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.487342 containerd[1559]: 2025-05-27 18:17:31.471 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd5bd327-01f8-498a-b961-849e97947b28", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d", Pod:"coredns-668d6bf9bc-fjksh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife3aca06008", MAC:"92:ca:6a:44:c2:82", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:31.487342 containerd[1559]: 2025-05-27 18:17:31.483 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjksh" WorkloadEndpoint="172--237--129--174-k8s-coredns--668d6bf9bc--fjksh-eth0" May 27 18:17:31.554419 containerd[1559]: time="2025-05-27T18:17:31.554156057Z" level=info msg="connecting to shim e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d" address="unix:///run/containerd/s/3fcff73406749419b00c39a0c42a4ece22c11ecf2456cf10404c293305ca07c3" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:31.557761 kubelet[2718]: E0527 18:17:31.557706 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:31.597401 systemd[1]: Started cri-containerd-e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d.scope - libcontainer container e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d. 
May 27 18:17:31.626198 systemd-networkd[1451]: califa06b53edc6: Link UP May 27 18:17:31.628127 systemd-networkd[1451]: califa06b53edc6: Gained carrier May 27 18:17:31.657407 kubelet[2718]: I0527 18:17:31.656930 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b98f7c89d-fg2h4" podStartSLOduration=22.251282786 podStartE2EDuration="24.656913728s" podCreationTimestamp="2025-05-27 18:17:07 +0000 UTC" firstStartedPulling="2025-05-27 18:17:28.670369392 +0000 UTC m=+36.412913453" lastFinishedPulling="2025-05-27 18:17:31.076000324 +0000 UTC m=+38.818544395" observedRunningTime="2025-05-27 18:17:31.581694749 +0000 UTC m=+39.324238820" watchObservedRunningTime="2025-05-27 18:17:31.656913728 +0000 UTC m=+39.399457809" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.426 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0 goldmane-78d55f7ddc- calico-system 3ce55071-e071-4152-8491-d9e57fa88f84 840 0 2025-05-27 18:17:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-237-129-174 goldmane-78d55f7ddc-fv6vw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califa06b53edc6 [] [] }} ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.426 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" 
WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.459 [INFO][4802] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" HandleID="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Workload="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.459 [INFO][4802] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" HandleID="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Workload="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9840), Attrs:map[string]string{"namespace":"calico-system", "node":"172-237-129-174", "pod":"goldmane-78d55f7ddc-fv6vw", "timestamp":"2025-05-27 18:17:31.459055403 +0000 UTC"}, Hostname:"172-237-129-174", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.459 [INFO][4802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.459 [INFO][4802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.459 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-237-129-174' May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.538 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.548 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.569 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.109.192/26 host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.574 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.582 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.582 [INFO][4802] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.588 [INFO][4802] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2 May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.610 [INFO][4802] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.615 [INFO][4802] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.200/26] block=192.168.109.192/26 
handle="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.615 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.200/26] handle="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" host="172-237-129-174" May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.615 [INFO][4802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 18:17:31.665161 containerd[1559]: 2025-05-27 18:17:31.615 [INFO][4802] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.200/26] IPv6=[] ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" HandleID="k8s-pod-network.472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Workload="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.665634 containerd[1559]: 2025-05-27 18:17:31.620 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3ce55071-e071-4152-8491-d9e57fa88f84", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"", Pod:"goldmane-78d55f7ddc-fv6vw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa06b53edc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:31.665634 containerd[1559]: 2025-05-27 18:17:31.620 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.200/32] ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.665634 containerd[1559]: 2025-05-27 18:17:31.620 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa06b53edc6 ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.665634 containerd[1559]: 2025-05-27 18:17:31.630 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.665634 containerd[1559]: 2025-05-27 18:17:31.633 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" 
Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"3ce55071-e071-4152-8491-d9e57fa88f84", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 18, 17, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-237-129-174", ContainerID:"472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2", Pod:"goldmane-78d55f7ddc-fv6vw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa06b53edc6", MAC:"c2:09:5b:28:f9:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 18:17:31.665634 containerd[1559]: 2025-05-27 18:17:31.660 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" Namespace="calico-system" Pod="goldmane-78d55f7ddc-fv6vw" WorkloadEndpoint="172--237--129--174-k8s-goldmane--78d55f7ddc--fv6vw-eth0" May 27 18:17:31.670012 
containerd[1559]: time="2025-05-27T18:17:31.669902594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjksh,Uid:dd5bd327-01f8-498a-b961-849e97947b28,Namespace:kube-system,Attempt:0,} returns sandbox id \"e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d\"" May 27 18:17:31.671648 kubelet[2718]: E0527 18:17:31.671596 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:31.676630 containerd[1559]: time="2025-05-27T18:17:31.676610159Z" level=info msg="CreateContainer within sandbox \"e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 18:17:31.703532 containerd[1559]: time="2025-05-27T18:17:31.703510332Z" level=info msg="Container 6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:31.712817 containerd[1559]: time="2025-05-27T18:17:31.712796393Z" level=info msg="CreateContainer within sandbox \"e53771091adb90178a35d3d5b3f9bb2da62a41e7fcab522bb1a45df751d38d6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f\"" May 27 18:17:31.714355 containerd[1559]: time="2025-05-27T18:17:31.714255134Z" level=info msg="StartContainer for \"6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f\"" May 27 18:17:31.716857 containerd[1559]: time="2025-05-27T18:17:31.716489936Z" level=info msg="connecting to shim 6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f" address="unix:///run/containerd/s/3fcff73406749419b00c39a0c42a4ece22c11ecf2456cf10404c293305ca07c3" protocol=ttrpc version=3 May 27 18:17:31.718257 containerd[1559]: time="2025-05-27T18:17:31.718024518Z" level=info msg="connecting to shim 
472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2" address="unix:///run/containerd/s/4963cada87d94a0afdcc382fdb443ec3cebddaf3c9269899dffce37fba5ffad6" namespace=k8s.io protocol=ttrpc version=3 May 27 18:17:31.752233 systemd[1]: Started cri-containerd-6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f.scope - libcontainer container 6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f. May 27 18:17:31.761267 systemd[1]: Started cri-containerd-472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2.scope - libcontainer container 472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2. May 27 18:17:31.808731 containerd[1559]: time="2025-05-27T18:17:31.808639597Z" level=info msg="StartContainer for \"6ef8fe9245b05190cee83e2e2dc94e5c3d093c460603ff49a2698861bee1fd5f\" returns successfully" May 27 18:17:31.894657 containerd[1559]: time="2025-05-27T18:17:31.894616970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-fv6vw,Uid:3ce55071-e071-4152-8491-d9e57fa88f84,Namespace:calico-system,Attempt:0,} returns sandbox id \"472a979fe2fcd7bb9ebe1309c4243481b8b624141e0483de898dc52a8e1efad2\"" May 27 18:17:32.563449 kubelet[2718]: E0527 18:17:32.563216 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:32.564156 kubelet[2718]: E0527 18:17:32.564027 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:32.577077 containerd[1559]: time="2025-05-27T18:17:32.577044305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:32.578898 containerd[1559]: time="2025-05-27T18:17:32.578877390Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 18:17:32.581828 containerd[1559]: time="2025-05-27T18:17:32.581699667Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:32.585146 kubelet[2718]: I0527 18:17:32.585112 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fjksh" podStartSLOduration=33.585098773 podStartE2EDuration="33.585098773s" podCreationTimestamp="2025-05-27 18:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 18:17:32.580257998 +0000 UTC m=+40.322802069" watchObservedRunningTime="2025-05-27 18:17:32.585098773 +0000 UTC m=+40.327642844" May 27 18:17:32.588996 containerd[1559]: time="2025-05-27T18:17:32.588380117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:32.592537 containerd[1559]: time="2025-05-27T18:17:32.592509802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.513190081s" May 27 18:17:32.592586 containerd[1559]: time="2025-05-27T18:17:32.592538552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 18:17:32.597212 containerd[1559]: time="2025-05-27T18:17:32.597190364Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 18:17:32.599444 containerd[1559]: time="2025-05-27T18:17:32.599361233Z" level=info msg="CreateContainer within sandbox \"267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 18:17:32.617006 containerd[1559]: time="2025-05-27T18:17:32.616957559Z" level=info msg="Container 1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:32.628001 containerd[1559]: time="2025-05-27T18:17:32.627951986Z" level=info msg="CreateContainer within sandbox \"267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5\"" May 27 18:17:32.628767 containerd[1559]: time="2025-05-27T18:17:32.628747286Z" level=info msg="StartContainer for \"1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5\"" May 27 18:17:32.630672 containerd[1559]: time="2025-05-27T18:17:32.630518520Z" level=info msg="connecting to shim 1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5" address="unix:///run/containerd/s/88a798734ac98ac2c2b2167b6abe08ea930e413adbea1a8121aa3626b51b77c2" protocol=ttrpc version=3 May 27 18:17:32.655130 systemd[1]: Started cri-containerd-1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5.scope - libcontainer container 1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5. 
May 27 18:17:32.707789 containerd[1559]: time="2025-05-27T18:17:32.707548569Z" level=info msg="StartContainer for \"1082eb68b3e70f30e7e2cfaff2f64154e10a827a5575fa81ec4754fb785797d5\" returns successfully" May 27 18:17:33.514609 systemd-networkd[1451]: calife3aca06008: Gained IPv6LL May 27 18:17:33.568458 kubelet[2718]: E0527 18:17:33.568416 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:33.642213 systemd-networkd[1451]: califa06b53edc6: Gained IPv6LL May 27 18:17:34.578749 kubelet[2718]: E0527 18:17:34.578564 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:17:34.740228 containerd[1559]: time="2025-05-27T18:17:34.740072777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:34.742258 containerd[1559]: time="2025-05-27T18:17:34.742125051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 18:17:34.742661 containerd[1559]: time="2025-05-27T18:17:34.742612806Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:34.745328 containerd[1559]: time="2025-05-27T18:17:34.745292738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:34.746611 containerd[1559]: time="2025-05-27T18:17:34.746543173Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.149325288s" May 27 18:17:34.746788 containerd[1559]: time="2025-05-27T18:17:34.746570563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 18:17:34.752203 containerd[1559]: time="2025-05-27T18:17:34.752124299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 18:17:34.774115 containerd[1559]: time="2025-05-27T18:17:34.774087098Z" level=info msg="CreateContainer within sandbox \"c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 18:17:34.789107 containerd[1559]: time="2025-05-27T18:17:34.787615649Z" level=info msg="Container 1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:34.800787 containerd[1559]: time="2025-05-27T18:17:34.800753853Z" level=info msg="CreateContainer within sandbox \"c04060a552ce01b57474d0104bffd1fabb5175e2bc72f85747c4473722a4526f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\"" May 27 18:17:34.802916 containerd[1559]: time="2025-05-27T18:17:34.802876498Z" level=info msg="StartContainer for \"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\"" May 27 18:17:34.805468 containerd[1559]: time="2025-05-27T18:17:34.805414988Z" level=info msg="connecting to shim 1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1" 
address="unix:///run/containerd/s/770bf316f0b9a9eaa83ec8e25dcb5625ab607fa912b0af9bf7dfecf16b066c36" protocol=ttrpc version=3 May 27 18:17:34.836305 systemd[1]: Started cri-containerd-1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1.scope - libcontainer container 1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1. May 27 18:17:34.907233 containerd[1559]: time="2025-05-27T18:17:34.907130899Z" level=info msg="StartContainer for \"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" returns successfully" May 27 18:17:34.926443 containerd[1559]: time="2025-05-27T18:17:34.926347127Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:34.928017 containerd[1559]: time="2025-05-27T18:17:34.927714653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 18:17:34.929762 containerd[1559]: time="2025-05-27T18:17:34.929719137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 177.572878ms" May 27 18:17:34.929762 containerd[1559]: time="2025-05-27T18:17:34.929756537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 18:17:34.931262 containerd[1559]: time="2025-05-27T18:17:34.931217474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:17:34.935172 containerd[1559]: time="2025-05-27T18:17:34.934457662Z" level=info msg="CreateContainer within sandbox 
\"0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 18:17:34.943695 containerd[1559]: time="2025-05-27T18:17:34.943276566Z" level=info msg="Container a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:34.955306 containerd[1559]: time="2025-05-27T18:17:34.955245758Z" level=info msg="CreateContainer within sandbox \"0940698822ba152f09770bc59178109c86cfa1d3a0715dbc20a8d771076c9f28\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d\"" May 27 18:17:34.956948 containerd[1559]: time="2025-05-27T18:17:34.956880337Z" level=info msg="StartContainer for \"a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d\"" May 27 18:17:34.959164 containerd[1559]: time="2025-05-27T18:17:34.959106613Z" level=info msg="connecting to shim a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d" address="unix:///run/containerd/s/bc10f568ea7318bfe3ccb998bda2b8dbe622ee155340d5ab6954aff4cd19ed5e" protocol=ttrpc version=3 May 27 18:17:34.998299 systemd[1]: Started cri-containerd-a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d.scope - libcontainer container a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d. 
May 27 18:17:35.111234 containerd[1559]: time="2025-05-27T18:17:35.111089910Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:17:35.112807 containerd[1559]: time="2025-05-27T18:17:35.112721409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:17:35.113074 containerd[1559]: time="2025-05-27T18:17:35.112951101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:17:35.113421 kubelet[2718]: E0527 18:17:35.113358 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:17:35.113692 kubelet[2718]: E0527 18:17:35.113634 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:17:35.114573 kubelet[2718]: E0527 18:17:35.114295 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jzt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fv6vw_calico-system(3ce55071-e071-4152-8491-d9e57fa88f84): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:17:35.114694 containerd[1559]: time="2025-05-27T18:17:35.114369617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 18:17:35.116264 kubelet[2718]: E0527 18:17:35.116221 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:17:35.153913 containerd[1559]: time="2025-05-27T18:17:35.153855226Z" level=info msg="StartContainer for \"a7cd8972b6bfed5ee16057bc02b93ab2a0ce8df08c3b6d786d05f26f7e1fc04d\" returns successfully" May 27 18:17:35.590757 kubelet[2718]: E0527 18:17:35.590679 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:17:35.616960 kubelet[2718]: I0527 18:17:35.616817 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b98f7c89d-tghp7" podStartSLOduration=23.809575458 podStartE2EDuration="28.616798886s" podCreationTimestamp="2025-05-27 18:17:07 +0000 UTC" firstStartedPulling="2025-05-27 18:17:30.123726853 +0000 UTC m=+37.866270924" lastFinishedPulling="2025-05-27 18:17:34.930950281 +0000 UTC m=+42.673494352" observedRunningTime="2025-05-27 18:17:35.602646949 +0000 UTC m=+43.345191020" watchObservedRunningTime="2025-05-27 18:17:35.616798886 +0000 UTC m=+43.359342957" May 27 
18:17:35.617292 kubelet[2718]: I0527 18:17:35.617143 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cd666d6b8-lfdk4" podStartSLOduration=20.910080371 podStartE2EDuration="25.61713795s" podCreationTimestamp="2025-05-27 18:17:10 +0000 UTC" firstStartedPulling="2025-05-27 18:17:30.041479217 +0000 UTC m=+37.784023288" lastFinishedPulling="2025-05-27 18:17:34.748536786 +0000 UTC m=+42.491080867" observedRunningTime="2025-05-27 18:17:35.616498833 +0000 UTC m=+43.359042904" watchObservedRunningTime="2025-05-27 18:17:35.61713795 +0000 UTC m=+43.359682021" May 27 18:17:36.591387 kubelet[2718]: I0527 18:17:36.591347 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:17:37.191606 containerd[1559]: time="2025-05-27T18:17:37.191538234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:37.192635 containerd[1559]: time="2025-05-27T18:17:37.192504793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 18:17:37.193422 containerd[1559]: time="2025-05-27T18:17:37.193378732Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:37.195199 containerd[1559]: time="2025-05-27T18:17:37.195159800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 18:17:37.195626 containerd[1559]: time="2025-05-27T18:17:37.195543464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id 
\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.081147006s" May 27 18:17:37.195626 containerd[1559]: time="2025-05-27T18:17:37.195574024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 18:17:37.201184 containerd[1559]: time="2025-05-27T18:17:37.199177799Z" level=info msg="CreateContainer within sandbox \"267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 18:17:37.207991 containerd[1559]: time="2025-05-27T18:17:37.205711903Z" level=info msg="Container 5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7: CDI devices from CRI Config.CDIDevices: []" May 27 18:17:37.224793 containerd[1559]: time="2025-05-27T18:17:37.224760630Z" level=info msg="CreateContainer within sandbox \"267a6fea59bec2347481e60cb779ad41366dedbad8792364d20db65562d3d201\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7\"" May 27 18:17:37.225202 containerd[1559]: time="2025-05-27T18:17:37.225181025Z" level=info msg="StartContainer for \"5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7\"" May 27 18:17:37.227222 containerd[1559]: time="2025-05-27T18:17:37.227201655Z" level=info msg="connecting to shim 5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7" address="unix:///run/containerd/s/88a798734ac98ac2c2b2167b6abe08ea930e413adbea1a8121aa3626b51b77c2" protocol=ttrpc version=3 May 27 18:17:37.255084 systemd[1]: Started 
cri-containerd-5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7.scope - libcontainer container 5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7. May 27 18:17:37.293209 containerd[1559]: time="2025-05-27T18:17:37.293146741Z" level=info msg="StartContainer for \"5151aeefd1f5b1444cc0a2f9e3c973b720c0791776ff462661cd69be6ed062f7\" returns successfully" May 27 18:17:37.346076 containerd[1559]: time="2025-05-27T18:17:37.346013631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:17:37.444063 containerd[1559]: time="2025-05-27T18:17:37.443745091Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:17:37.444958 containerd[1559]: time="2025-05-27T18:17:37.444927272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:17:37.445048 containerd[1559]: time="2025-05-27T18:17:37.445006953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:17:37.445211 kubelet[2718]: E0527 18:17:37.445171 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:17:37.445211 kubelet[2718]: E0527 18:17:37.445205 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:17:37.445344 kubelet[2718]: E0527 18:17:37.445284 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2106e42b1444a1ea1099c7150d402b7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:17:37.447162 containerd[1559]: time="2025-05-27T18:17:37.447093393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:17:37.458730 kubelet[2718]: I0527 18:17:37.458670 2718 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 18:17:37.458730 kubelet[2718]: I0527 18:17:37.458693 2718 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 18:17:37.545273 containerd[1559]: time="2025-05-27T18:17:37.545208826Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:17:37.546585 containerd[1559]: time="2025-05-27T18:17:37.546546770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:17:37.546707 containerd[1559]: time="2025-05-27T18:17:37.546614541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 18:17:37.546781 kubelet[2718]: E0527 18:17:37.546725 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:17:37.546856 kubelet[2718]: E0527 18:17:37.546782 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:17:37.546907 kubelet[2718]: E0527 18:17:37.546875 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:17:37.548335 kubelet[2718]: E0527 18:17:37.548276 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:17:44.348261 kubelet[2718]: I0527 18:17:44.348191 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 18:17:44.388151 containerd[1559]: time="2025-05-27T18:17:44.388110126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"7c1d4976439028058d4e5a6fbfef7c4ceb6c08b24d97f71314fabeeff77ccbd9\" pid:5160 exited_at:{seconds:1748369864 nanos:387796844}" May 27 18:17:44.403805 kubelet[2718]: I0527 18:17:44.403662 2718 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/csi-node-driver-8fsl8" podStartSLOduration=25.893885258 podStartE2EDuration="34.403648976s" podCreationTimestamp="2025-05-27 18:17:10 +0000 UTC" firstStartedPulling="2025-05-27 18:17:28.686933156 +0000 UTC m=+36.429477227" lastFinishedPulling="2025-05-27 18:17:37.196696874 +0000 UTC m=+44.939240945" observedRunningTime="2025-05-27 18:17:37.604722071 +0000 UTC m=+45.347266142" watchObservedRunningTime="2025-05-27 18:17:44.403648976 +0000 UTC m=+52.146193047" May 27 18:17:44.444457 containerd[1559]: time="2025-05-27T18:17:44.444376118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"8745ecc5d5c10bbff56da97ad4c13258fdace04655009fb7dc6193733a75434f\" pid:5181 exited_at:{seconds:1748369864 nanos:443777924}" May 27 18:17:47.348076 containerd[1559]: time="2025-05-27T18:17:47.347829472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:17:47.936736 containerd[1559]: time="2025-05-27T18:17:47.936674775Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:17:47.938070 containerd[1559]: time="2025-05-27T18:17:47.938019651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:17:47.938171 containerd[1559]: time="2025-05-27T18:17:47.938139202Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:17:47.938408 kubelet[2718]: E0527 18:17:47.938336 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:17:47.938408 kubelet[2718]: E0527 18:17:47.938393 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:17:47.939033 kubelet[2718]: E0527 18:17:47.938521 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jzt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fv6vw_calico-system(3ce55071-e071-4152-8491-d9e57fa88f84): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:17:47.940374 kubelet[2718]: E0527 18:17:47.940343 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:17:52.349493 kubelet[2718]: E0527 
18:17:52.349355 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:17:52.589430 containerd[1559]: time="2025-05-27T18:17:52.589359078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"1027a8cdfbdb1b89c51f21d1e8e56e60cba6b6a650e5ea2b47f21b69409ed63e\" pid:5213 exited_at:{seconds:1748369872 nanos:588911226}" May 27 18:18:03.349833 containerd[1559]: time="2025-05-27T18:18:03.349779798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 18:18:03.351733 kubelet[2718]: E0527 18:18:03.350688 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": 
ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:18:03.450970 containerd[1559]: time="2025-05-27T18:18:03.450908634Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:18:03.452185 containerd[1559]: time="2025-05-27T18:18:03.452148918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 18:18:03.452260 containerd[1559]: time="2025-05-27T18:18:03.452184328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:18:03.453885 kubelet[2718]: E0527 18:18:03.453838 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:18:03.454008 kubelet[2718]: E0527 18:18:03.453892 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 18:18:03.454043 kubelet[2718]: E0527 18:18:03.454001 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2106e42b1444a1ea1099c7150d402b7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:18:03.456093 containerd[1559]: time="2025-05-27T18:18:03.456051266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 18:18:03.592856 containerd[1559]: time="2025-05-27T18:18:03.592787232Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:18:03.595070 containerd[1559]: time="2025-05-27T18:18:03.595032977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:18:03.595316 containerd[1559]: time="2025-05-27T18:18:03.595123477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 
27 18:18:03.595534 kubelet[2718]: E0527 18:18:03.595472 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:18:03.595710 kubelet[2718]: E0527 18:18:03.595638 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 18:18:03.596547 kubelet[2718]: E0527 18:18:03.596494 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:18:03.597867 kubelet[2718]: E0527 18:18:03.597728 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:18:09.345553 kubelet[2718]: E0527 18:18:09.345495 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:18:12.347087 kubelet[2718]: E0527 18:18:12.347048 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:18:13.346574 kubelet[2718]: E0527 18:18:13.346522 2718 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:18:14.348581 kubelet[2718]: E0527 18:18:14.348511 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:18:14.351069 containerd[1559]: time="2025-05-27T18:18:14.349964674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 18:18:14.458585 containerd[1559]: time="2025-05-27T18:18:14.458535364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"a965cb62cccd81d2f08aff9debc3cb68aa3418d33d939044ed27d0d6c3964f12\" pid:5251 exited_at:{seconds:1748369894 nanos:457893235}" May 27 18:18:14.491903 containerd[1559]: time="2025-05-27T18:18:14.491840478Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 18:18:14.493200 containerd[1559]: time="2025-05-27T18:18:14.493160197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 18:18:14.493332 containerd[1559]: time="2025-05-27T18:18:14.493180556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 18:18:14.493556 
kubelet[2718]: E0527 18:18:14.493498 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:18:14.493618 kubelet[2718]: E0527 18:18:14.493569 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 18:18:14.493833 kubelet[2718]: E0527 18:18:14.493737 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jzt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fv6vw_calico-system(3ce55071-e071-4152-8491-d9e57fa88f84): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 18:18:14.495254 kubelet[2718]: E0527 18:18:14.495220 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:18:16.351372 kubelet[2718]: E0527 
18:18:16.351313 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:18:22.585734 containerd[1559]: time="2025-05-27T18:18:22.585657679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"f6cc05cb64afecd2a0a3a2d47048d7c89fd9423b3fc114426e8e234ca704ba0e\" pid:5274 exited_at:{seconds:1748369902 nanos:585139276}" May 27 18:18:28.351064 kubelet[2718]: E0527 18:18:28.351018 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:18:28.352065 kubelet[2718]: E0527 18:18:28.351892 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:18:31.849300 containerd[1559]: time="2025-05-27T18:18:31.849259488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"8f0c399aa2003856cf10e290f0d04d0449a7d3e5a8b81092fd9e66f3831e9c17\" pid:5301 exited_at:{seconds:1748369911 nanos:848925691}" May 27 18:18:35.346125 
kubelet[2718]: E0527 18:18:35.346073 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:18:40.347142 kubelet[2718]: E0527 18:18:40.346799 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84"
May 27 18:18:43.350606 kubelet[2718]: E0527 18:18:43.350459 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20"
May 27 18:18:44.437092 containerd[1559]: time="2025-05-27T18:18:44.436888474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"a5716269cb26f23484713728df9fadcc3a20b23625e71b149b2973b3168ecab3\" pid:5324 exited_at:{seconds:1748369924 nanos:436474938}"
May 27 18:18:48.345917 kubelet[2718]: E0527 18:18:48.345824 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:18:52.573475 containerd[1559]: time="2025-05-27T18:18:52.573333962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"7693c9d6f32d86c286f91233b37cdfa3c6445c9f25e74a59dbdceca16123c170\" pid:5356 exited_at:{seconds:1748369932 nanos:572752495}"
May 27 18:18:53.347244 kubelet[2718]: E0527 18:18:53.347106 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84"
May 27 18:18:55.348695 containerd[1559]: time="2025-05-27T18:18:55.348145961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 18:18:55.454196 containerd[1559]: time="2025-05-27T18:18:55.454113257Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 18:18:55.455805 containerd[1559]: time="2025-05-27T18:18:55.455645898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 18:18:55.455805 containerd[1559]: time="2025-05-27T18:18:55.455736128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 18:18:55.456284 kubelet[2718]: E0527 18:18:55.456204 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 18:18:55.456717 kubelet[2718]: E0527 18:18:55.456336 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 18:18:55.456963 kubelet[2718]: E0527 18:18:55.456464 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2106e42b1444a1ea1099c7150d402b7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 18:18:55.459466 containerd[1559]: time="2025-05-27T18:18:55.459421297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 18:18:55.561356 containerd[1559]: time="2025-05-27T18:18:55.561292136Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 18:18:55.562847 containerd[1559]: time="2025-05-27T18:18:55.562755927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 18:18:55.563187 containerd[1559]: time="2025-05-27T18:18:55.562764997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 18:18:55.563386 kubelet[2718]: E0527 18:18:55.563332 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 18:18:55.563536 kubelet[2718]: E0527 18:18:55.563404 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 18:18:55.563610 kubelet[2718]: E0527 18:18:55.563574 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pl6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5c45797667-dfdqs_calico-system(3bec5c37-47a4-48c5-93f9-be68e4cfdd20): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 18:18:55.565071 kubelet[2718]: E0527 18:18:55.565023 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20"
May 27 18:18:59.346050 kubelet[2718]: E0527 18:18:59.345906 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:19:04.347261 containerd[1559]: time="2025-05-27T18:19:04.346935053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 18:19:04.450457 containerd[1559]: time="2025-05-27T18:19:04.450403493Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 18:19:04.451878 containerd[1559]: time="2025-05-27T18:19:04.451817587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 18:19:04.452109 containerd[1559]: time="2025-05-27T18:19:04.451833697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 18:19:04.452216 kubelet[2718]: E0527 18:19:04.452158 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 18:19:04.452504 kubelet[2718]: E0527 18:19:04.452238 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 18:19:04.453052 kubelet[2718]: E0527 18:19:04.452958 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jzt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-fv6vw_calico-system(3ce55071-e071-4152-8491-d9e57fa88f84): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 18:19:04.455046 kubelet[2718]: E0527 18:19:04.454753 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84"
May 27 18:19:06.348504 kubelet[2718]: E0527 18:19:06.348155 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20"
May 27 18:19:09.094028 systemd[1]: Started sshd@8-172.237.129.174:22-139.178.89.65:39020.service - OpenSSH per-connection server daemon (139.178.89.65:39020).
May 27 18:19:09.465300 sshd[5394]: Accepted publickey for core from 139.178.89.65 port 39020 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:09.467465 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:09.474025 systemd-logind[1534]: New session 8 of user core.
May 27 18:19:09.476123 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 18:19:09.833873 sshd[5396]: Connection closed by 139.178.89.65 port 39020
May 27 18:19:09.836209 sshd-session[5394]: pam_unix(sshd:session): session closed for user core
May 27 18:19:09.843487 systemd[1]: sshd@8-172.237.129.174:22-139.178.89.65:39020.service: Deactivated successfully.
May 27 18:19:09.847992 systemd[1]: session-8.scope: Deactivated successfully.
May 27 18:19:09.849585 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit.
May 27 18:19:09.853182 systemd-logind[1534]: Removed session 8.
May 27 18:19:14.591751 containerd[1559]: time="2025-05-27T18:19:14.591696127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"355a6090c9efb8c0a9be51aec71db474cd987da7b87f2d73f9658eb39ddfbd5d\" pid:5423 exited_at:{seconds:1748369954 nanos:591360729}"
May 27 18:19:14.900181 systemd[1]: Started sshd@9-172.237.129.174:22-139.178.89.65:53540.service - OpenSSH per-connection server daemon (139.178.89.65:53540).
May 27 18:19:15.265043 sshd[5434]: Accepted publickey for core from 139.178.89.65 port 53540 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:15.266333 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:15.273355 systemd-logind[1534]: New session 9 of user core.
May 27 18:19:15.280148 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 18:19:15.585713 sshd[5436]: Connection closed by 139.178.89.65 port 53540
May 27 18:19:15.586218 sshd-session[5434]: pam_unix(sshd:session): session closed for user core
May 27 18:19:15.593130 systemd[1]: sshd@9-172.237.129.174:22-139.178.89.65:53540.service: Deactivated successfully.
May 27 18:19:15.596051 systemd[1]: session-9.scope: Deactivated successfully.
May 27 18:19:15.598574 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit.
May 27 18:19:15.601584 systemd-logind[1534]: Removed session 9.
May 27 18:19:16.347211 kubelet[2718]: E0527 18:19:16.347135 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84"
May 27 18:19:17.345591 kubelet[2718]: E0527 18:19:17.345500 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:19:18.348665 kubelet[2718]: E0527 18:19:18.348581 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20"
May 27 18:19:20.658107 systemd[1]: Started sshd@10-172.237.129.174:22-139.178.89.65:53546.service - OpenSSH per-connection server daemon (139.178.89.65:53546).
May 27 18:19:21.013738 sshd[5454]: Accepted publickey for core from 139.178.89.65 port 53546 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:21.015426 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:21.021581 systemd-logind[1534]: New session 10 of user core.
May 27 18:19:21.027264 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 18:19:21.320204 sshd[5456]: Connection closed by 139.178.89.65 port 53546
May 27 18:19:21.321453 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
May 27 18:19:21.326091 systemd[1]: sshd@10-172.237.129.174:22-139.178.89.65:53546.service: Deactivated successfully.
May 27 18:19:21.329094 systemd[1]: session-10.scope: Deactivated successfully.
May 27 18:19:21.330965 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit.
May 27 18:19:21.332713 systemd-logind[1534]: Removed session 10.
May 27 18:19:21.391338 systemd[1]: Started sshd@11-172.237.129.174:22-139.178.89.65:53556.service - OpenSSH per-connection server daemon (139.178.89.65:53556).
May 27 18:19:21.757384 sshd[5469]: Accepted publickey for core from 139.178.89.65 port 53556 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:21.759182 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:21.765704 systemd-logind[1534]: New session 11 of user core.
May 27 18:19:21.770153 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 18:19:22.110007 sshd[5471]: Connection closed by 139.178.89.65 port 53556
May 27 18:19:22.110727 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
May 27 18:19:22.115765 systemd[1]: sshd@11-172.237.129.174:22-139.178.89.65:53556.service: Deactivated successfully.
May 27 18:19:22.118852 systemd[1]: session-11.scope: Deactivated successfully.
May 27 18:19:22.123414 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit.
May 27 18:19:22.125205 systemd-logind[1534]: Removed session 11.
May 27 18:19:22.175586 systemd[1]: Started sshd@12-172.237.129.174:22-139.178.89.65:53568.service - OpenSSH per-connection server daemon (139.178.89.65:53568).
May 27 18:19:22.346166 kubelet[2718]: E0527 18:19:22.345797 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:19:22.547024 sshd[5481]: Accepted publickey for core from 139.178.89.65 port 53568 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:22.547048 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:22.558026 systemd-logind[1534]: New session 12 of user core.
May 27 18:19:22.564432 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 18:19:22.675854 containerd[1559]: time="2025-05-27T18:19:22.675595904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"43ef190996374aff193ff192df79290447bc35f6651ce1a0a6c89e3c34cc7c8d\" pid:5496 exit_status:1 exited_at:{seconds:1748369962 nanos:674326968}"
May 27 18:19:22.855930 sshd[5502]: Connection closed by 139.178.89.65 port 53568
May 27 18:19:22.856872 sshd-session[5481]: pam_unix(sshd:session): session closed for user core
May 27 18:19:22.861423 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit.
May 27 18:19:22.862156 systemd[1]: sshd@12-172.237.129.174:22-139.178.89.65:53568.service: Deactivated successfully.
May 27 18:19:22.865046 systemd[1]: session-12.scope: Deactivated successfully.
May 27 18:19:22.867208 systemd-logind[1534]: Removed session 12.
May 27 18:19:27.926847 systemd[1]: Started sshd@13-172.237.129.174:22-139.178.89.65:51280.service - OpenSSH per-connection server daemon (139.178.89.65:51280).
May 27 18:19:28.284751 sshd[5520]: Accepted publickey for core from 139.178.89.65 port 51280 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:28.286582 sshd-session[5520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:28.292712 systemd-logind[1534]: New session 13 of user core.
May 27 18:19:28.297425 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 18:19:28.608790 sshd[5522]: Connection closed by 139.178.89.65 port 51280
May 27 18:19:28.609544 sshd-session[5520]: pam_unix(sshd:session): session closed for user core
May 27 18:19:28.614063 systemd[1]: sshd@13-172.237.129.174:22-139.178.89.65:51280.service: Deactivated successfully.
May 27 18:19:28.617357 systemd[1]: session-13.scope: Deactivated successfully.
May 27 18:19:28.619084 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit.
May 27 18:19:28.622308 systemd-logind[1534]: Removed session 13.
May 27 18:19:28.673683 systemd[1]: Started sshd@14-172.237.129.174:22-139.178.89.65:51290.service - OpenSSH per-connection server daemon (139.178.89.65:51290).
May 27 18:19:29.041689 sshd[5534]: Accepted publickey for core from 139.178.89.65 port 51290 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:29.043576 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:29.048447 systemd-logind[1534]: New session 14 of user core.
May 27 18:19:29.055085 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 18:19:29.347407 kubelet[2718]: E0527 18:19:29.347279 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:19:29.349161 kubelet[2718]: E0527 18:19:29.349120 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84"
May 27 18:19:29.505746 sshd[5536]: Connection closed by 139.178.89.65 port 51290
May 27 18:19:29.507399 sshd-session[5534]: pam_unix(sshd:session): session closed for user core
May 27 18:19:29.513175 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit.
May 27 18:19:29.513958 systemd[1]: sshd@14-172.237.129.174:22-139.178.89.65:51290.service: Deactivated successfully.
May 27 18:19:29.516609 systemd[1]: session-14.scope: Deactivated successfully.
May 27 18:19:29.518562 systemd-logind[1534]: Removed session 14.
May 27 18:19:29.569921 systemd[1]: Started sshd@15-172.237.129.174:22-139.178.89.65:51302.service - OpenSSH per-connection server daemon (139.178.89.65:51302).
May 27 18:19:29.933776 sshd[5546]: Accepted publickey for core from 139.178.89.65 port 51302 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:29.935524 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:29.940660 systemd-logind[1534]: New session 15 of user core.
May 27 18:19:29.947120 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 18:19:30.345828 kubelet[2718]: E0527 18:19:30.345751 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 27 18:19:30.816638 sshd[5550]: Connection closed by 139.178.89.65 port 51302
May 27 18:19:30.818300 sshd-session[5546]: pam_unix(sshd:session): session closed for user core
May 27 18:19:30.823429 systemd[1]: sshd@15-172.237.129.174:22-139.178.89.65:51302.service: Deactivated successfully.
May 27 18:19:30.827130 systemd[1]: session-15.scope: Deactivated successfully.
May 27 18:19:30.828365 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
May 27 18:19:30.830217 systemd-logind[1534]: Removed session 15.
May 27 18:19:30.881790 systemd[1]: Started sshd@16-172.237.129.174:22-139.178.89.65:51304.service - OpenSSH per-connection server daemon (139.178.89.65:51304).
May 27 18:19:31.256674 sshd[5567]: Accepted publickey for core from 139.178.89.65 port 51304 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:31.258947 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:31.264766 systemd-logind[1534]: New session 16 of user core.
May 27 18:19:31.271105 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 18:19:31.348168 kubelet[2718]: E0527 18:19:31.348044 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20"
May 27 18:19:31.694667 sshd[5569]: Connection closed by 139.178.89.65 port 51304
May 27 18:19:31.695700 sshd-session[5567]: pam_unix(sshd:session): session closed for user core
May 27 18:19:31.702081 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
May 27 18:19:31.703118 systemd[1]: sshd@16-172.237.129.174:22-139.178.89.65:51304.service: Deactivated successfully.
May 27 18:19:31.706084 systemd[1]: session-16.scope: Deactivated successfully.
May 27 18:19:31.708272 systemd-logind[1534]: Removed session 16.
May 27 18:19:31.763928 systemd[1]: Started sshd@17-172.237.129.174:22-139.178.89.65:51312.service - OpenSSH per-connection server daemon (139.178.89.65:51312).
May 27 18:19:31.845692 containerd[1559]: time="2025-05-27T18:19:31.845647668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"83b54b756718c93e22f1ad222cd7df8a4ac720da914984c34d776ced9d916bd6\" pid:5595 exited_at:{seconds:1748369971 nanos:845257369}"
May 27 18:19:32.137258 sshd[5580]: Accepted publickey for core from 139.178.89.65 port 51312 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw
May 27 18:19:32.139352 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 18:19:32.146286 systemd-logind[1534]: New session 17 of user core.
May 27 18:19:32.153144 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 18:19:32.475939 sshd[5604]: Connection closed by 139.178.89.65 port 51312
May 27 18:19:32.477198 sshd-session[5580]: pam_unix(sshd:session): session closed for user core
May 27 18:19:32.481897 systemd[1]: sshd@17-172.237.129.174:22-139.178.89.65:51312.service: Deactivated successfully.
May 27 18:19:32.482491 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit.
May 27 18:19:32.484846 systemd[1]: session-17.scope: Deactivated successfully.
May 27 18:19:32.492462 systemd-logind[1534]: Removed session 17.
May 27 18:19:37.543190 systemd[1]: Started sshd@18-172.237.129.174:22-139.178.89.65:35144.service - OpenSSH per-connection server daemon (139.178.89.65:35144).
May 27 18:19:37.904198 sshd[5618]: Accepted publickey for core from 139.178.89.65 port 35144 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:19:37.906572 sshd-session[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:19:37.913118 systemd-logind[1534]: New session 18 of user core. May 27 18:19:37.918114 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 18:19:38.230804 sshd[5620]: Connection closed by 139.178.89.65 port 35144 May 27 18:19:38.231631 sshd-session[5618]: pam_unix(sshd:session): session closed for user core May 27 18:19:38.237574 systemd[1]: sshd@18-172.237.129.174:22-139.178.89.65:35144.service: Deactivated successfully. May 27 18:19:38.241079 systemd[1]: session-18.scope: Deactivated successfully. May 27 18:19:38.242335 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. May 27 18:19:38.244335 systemd-logind[1534]: Removed session 18. May 27 18:19:39.019683 systemd[1]: Started sshd@19-172.237.129.174:22-154.127.42.2:53800.service - OpenSSH per-connection server daemon (154.127.42.2:53800). 
May 27 18:19:41.228891 sshd[5632]: Invalid user httpd from 154.127.42.2 port 53800 May 27 18:19:41.345633 kubelet[2718]: E0527 18:19:41.345073 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 27 18:19:41.640463 sshd-session[5634]: pam_faillock(sshd:auth): User unknown May 27 18:19:41.645546 sshd[5632]: Postponed keyboard-interactive for invalid user httpd from 154.127.42.2 port 53800 ssh2 [preauth] May 27 18:19:42.027868 sshd-session[5634]: pam_unix(sshd:auth): check pass; user unknown May 27 18:19:42.027904 sshd-session[5634]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=154.127.42.2 May 27 18:19:42.028509 sshd-session[5634]: pam_faillock(sshd:auth): User unknown May 27 18:19:42.350584 kubelet[2718]: E0527 18:19:42.350465 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:19:43.298128 systemd[1]: Started sshd@20-172.237.129.174:22-139.178.89.65:59456.service - OpenSSH per-connection server daemon (139.178.89.65:59456). May 27 18:19:43.660129 sshd[5636]: Accepted publickey for core from 139.178.89.65 port 59456 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:19:43.661881 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:19:43.667336 systemd-logind[1534]: New session 19 of user core. May 27 18:19:43.675096 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 18:19:43.965743 sshd[5638]: Connection closed by 139.178.89.65 port 59456 May 27 18:19:43.966490 sshd-session[5636]: pam_unix(sshd:session): session closed for user core May 27 18:19:43.967642 sshd[5632]: PAM: Permission denied for illegal user httpd from 154.127.42.2 May 27 18:19:43.967642 sshd[5632]: Failed keyboard-interactive/pam for invalid user httpd from 154.127.42.2 port 53800 ssh2 May 27 18:19:43.970762 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. May 27 18:19:43.970892 systemd[1]: sshd@20-172.237.129.174:22-139.178.89.65:59456.service: Deactivated successfully. May 27 18:19:43.972617 systemd[1]: session-19.scope: Deactivated successfully. May 27 18:19:43.974131 systemd-logind[1534]: Removed session 19. 
May 27 18:19:44.347112 kubelet[2718]: E0527 18:19:44.346288 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:19:44.449098 containerd[1559]: time="2025-05-27T18:19:44.449037045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1424005c62605e1dc293344e71e2954ec63fe5dfe2296be63876dade876690e1\" id:\"f64bc6f4af4b6bf1ac8a3e8f0440e8ad97668e8b31522288e816c77f09f949f6\" pid:5662 exited_at:{seconds:1748369984 nanos:448831086}" May 27 18:19:44.455319 sshd[5632]: Connection closed by invalid user httpd 154.127.42.2 port 53800 [preauth] May 27 18:19:44.458326 systemd[1]: sshd@19-172.237.129.174:22-154.127.42.2:53800.service: Deactivated successfully. May 27 18:19:48.877151 systemd[1]: Started sshd@21-172.237.129.174:22-170.64.209.182:35184.service - OpenSSH per-connection server daemon (170.64.209.182:35184). May 27 18:19:49.037048 systemd[1]: Started sshd@22-172.237.129.174:22-139.178.89.65:59458.service - OpenSSH per-connection server daemon (139.178.89.65:59458). May 27 18:19:49.401868 sshd[5677]: Accepted publickey for core from 139.178.89.65 port 59458 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:19:49.403456 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:19:49.411013 systemd-logind[1534]: New session 20 of user core. 
May 27 18:19:49.415260 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 18:19:49.657635 sshd[5674]: Invalid user from 170.64.209.182 port 35184 May 27 18:19:49.718222 sshd[5679]: Connection closed by 139.178.89.65 port 59458 May 27 18:19:49.718932 sshd-session[5677]: pam_unix(sshd:session): session closed for user core May 27 18:19:49.723371 systemd[1]: sshd@22-172.237.129.174:22-139.178.89.65:59458.service: Deactivated successfully. May 27 18:19:49.725728 systemd[1]: session-20.scope: Deactivated successfully. May 27 18:19:49.728485 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. May 27 18:19:49.729782 systemd-logind[1534]: Removed session 20. May 27 18:19:52.568646 containerd[1559]: time="2025-05-27T18:19:52.568608846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7588483e82e5c24bd19223e7b68d6960aa2ddb73f52851dfe5abd2e2046acb63\" id:\"d6c01d7c49a5b835a45a7d1b407c1a6781d884619dcc957807eedd795a738f9f\" pid:5705 exited_at:{seconds:1748369992 nanos:568290867}" May 27 18:19:53.349110 kubelet[2718]: E0527 18:19:53.349046 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:19:54.788205 systemd[1]: Started sshd@23-172.237.129.174:22-139.178.89.65:42412.service - OpenSSH per-connection server daemon (139.178.89.65:42412). May 27 18:19:55.157145 sshd[5718]: Accepted publickey for core from 139.178.89.65 port 42412 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:19:55.159610 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:19:55.165946 systemd-logind[1534]: New session 21 of user core. May 27 18:19:55.172026 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 18:19:55.476187 sshd[5720]: Connection closed by 139.178.89.65 port 42412 May 27 18:19:55.475969 sshd-session[5718]: pam_unix(sshd:session): session closed for user core May 27 18:19:55.483715 systemd[1]: sshd@23-172.237.129.174:22-139.178.89.65:42412.service: Deactivated successfully. May 27 18:19:55.484233 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. May 27 18:19:55.486516 systemd[1]: session-21.scope: Deactivated successfully. May 27 18:19:55.490994 systemd-logind[1534]: Removed session 21. May 27 18:19:56.865114 sshd[5674]: Connection closed by invalid user 170.64.209.182 port 35184 [preauth] May 27 18:19:56.867244 systemd[1]: sshd@21-172.237.129.174:22-170.64.209.182:35184.service: Deactivated successfully. 
May 27 18:19:59.346604 kubelet[2718]: E0527 18:19:59.346547 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84" May 27 18:20:00.542600 systemd[1]: Started sshd@24-172.237.129.174:22-139.178.89.65:42420.service - OpenSSH per-connection server daemon (139.178.89.65:42420). May 27 18:20:00.900176 sshd[5736]: Accepted publickey for core from 139.178.89.65 port 42420 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:20:00.901967 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:20:00.907117 systemd-logind[1534]: New session 22 of user core. May 27 18:20:00.912096 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 18:20:01.216838 sshd[5738]: Connection closed by 139.178.89.65 port 42420 May 27 18:20:01.217663 sshd-session[5736]: pam_unix(sshd:session): session closed for user core May 27 18:20:01.221871 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. May 27 18:20:01.222525 systemd[1]: sshd@24-172.237.129.174:22-139.178.89.65:42420.service: Deactivated successfully. May 27 18:20:01.224837 systemd[1]: session-22.scope: Deactivated successfully. May 27 18:20:01.226771 systemd-logind[1534]: Removed session 22. 
May 27 18:20:04.348545 kubelet[2718]: E0527 18:20:04.348435 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5c45797667-dfdqs" podUID="3bec5c37-47a4-48c5-93f9-be68e4cfdd20" May 27 18:20:06.282246 systemd[1]: Started sshd@25-172.237.129.174:22-139.178.89.65:39200.service - OpenSSH per-connection server daemon (139.178.89.65:39200). May 27 18:20:06.641763 sshd[5750]: Accepted publickey for core from 139.178.89.65 port 39200 ssh2: RSA SHA256:ZIGvjpYhdi6+jKU6Ppm9MLMGwult3xuJcwOk2Crd0Zw May 27 18:20:06.643759 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 18:20:06.650245 systemd-logind[1534]: New session 23 of user core. May 27 18:20:06.658162 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 27 18:20:06.952079 sshd[5752]: Connection closed by 139.178.89.65 port 39200 May 27 18:20:06.953003 sshd-session[5750]: pam_unix(sshd:session): session closed for user core May 27 18:20:06.958201 systemd[1]: sshd@25-172.237.129.174:22-139.178.89.65:39200.service: Deactivated successfully. May 27 18:20:06.961768 systemd[1]: session-23.scope: Deactivated successfully. May 27 18:20:06.963279 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. May 27 18:20:06.964769 systemd-logind[1534]: Removed session 23. May 27 18:20:11.346770 kubelet[2718]: E0527 18:20:11.346703 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-fv6vw" podUID="3ce55071-e071-4152-8491-d9e57fa88f84"