Oct 30 00:00:47.890372 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:07:32 -00 2025
Oct 30 00:00:47.890400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:00:47.890418 kernel: BIOS-provided physical RAM map:
Oct 30 00:00:47.890430 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 30 00:00:47.890439 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 30 00:00:47.890446 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 30 00:00:47.890455 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct 30 00:00:47.890469 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct 30 00:00:47.890477 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 00:00:47.890484 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 30 00:00:47.890494 kernel: NX (Execute Disable) protection: active
Oct 30 00:00:47.890515 kernel: APIC: Static calls initialized
Oct 30 00:00:47.890522 kernel: SMBIOS 2.8 present.
Oct 30 00:00:47.890541 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 30 00:00:47.890550 kernel: DMI: Memory slots populated: 1/1
Oct 30 00:00:47.890558 kernel: Hypervisor detected: KVM
Oct 30 00:00:47.890573 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 30 00:00:47.890581 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 30 00:00:47.890589 kernel: kvm-clock: using sched offset of 5789962084 cycles
Oct 30 00:00:47.890598 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 30 00:00:47.890606 kernel: tsc: Detected 2494.140 MHz processor
Oct 30 00:00:47.890615 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 00:00:47.890629 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 00:00:47.890640 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 30 00:00:47.890652 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 30 00:00:47.890669 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 00:00:47.890679 kernel: ACPI: Early table checksum verification disabled
Oct 30 00:00:47.890690 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Oct 30 00:00:47.890701 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890713 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890726 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890734 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 30 00:00:47.890743 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890751 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890763 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890771 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:00:47.890779 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 30 00:00:47.890787 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 30 00:00:47.890795 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 30 00:00:47.890803 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 30 00:00:47.890815 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 30 00:00:47.890827 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 30 00:00:47.890841 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 30 00:00:47.890855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 30 00:00:47.890867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 30 00:00:47.890879 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Oct 30 00:00:47.890890 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Oct 30 00:00:47.890901 kernel: Zone ranges:
Oct 30 00:00:47.890918 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 00:00:47.890931 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Oct 30 00:00:47.890944 kernel: Normal empty
Oct 30 00:00:47.890952 kernel: Device empty
Oct 30 00:00:47.890961 kernel: Movable zone start for each node
Oct 30 00:00:47.890969 kernel: Early memory node ranges
Oct 30 00:00:47.890978 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 30 00:00:47.890987 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct 30 00:00:47.891001 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Oct 30 00:00:47.891013 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:00:47.891030 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 30 00:00:47.891043 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Oct 30 00:00:47.891057 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 30 00:00:47.891076 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 30 00:00:47.891089 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 00:00:47.891104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 30 00:00:47.891116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 30 00:00:47.891128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 00:00:47.891142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 30 00:00:47.891158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 30 00:00:47.891170 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 00:00:47.891182 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 30 00:00:47.891195 kernel: TSC deadline timer available
Oct 30 00:00:47.891206 kernel: CPU topo: Max. logical packages: 1
Oct 30 00:00:47.891218 kernel: CPU topo: Max. logical dies: 1
Oct 30 00:00:47.891230 kernel: CPU topo: Max. dies per package: 1
Oct 30 00:00:47.891242 kernel: CPU topo: Max. threads per core: 1
Oct 30 00:00:47.891256 kernel: CPU topo: Num. cores per package: 2
Oct 30 00:00:47.891272 kernel: CPU topo: Num. threads per package: 2
Oct 30 00:00:47.891285 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Oct 30 00:00:47.891300 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 30 00:00:47.891316 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 30 00:00:47.891327 kernel: Booting paravirtualized kernel on KVM
Oct 30 00:00:47.891339 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 00:00:47.891351 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 30 00:00:47.891363 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Oct 30 00:00:47.891377 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Oct 30 00:00:47.891394 kernel: pcpu-alloc: [0] 0 1
Oct 30 00:00:47.891406 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 30 00:00:47.891421 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13
Oct 30 00:00:47.891436 kernel: random: crng init done
Oct 30 00:00:47.891450 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 00:00:47.891466 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 30 00:00:47.891480 kernel: Fallback order for Node 0: 0
Oct 30 00:00:47.891494 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Oct 30 00:00:47.891538 kernel: Policy zone: DMA32
Oct 30 00:00:47.891554 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 00:00:47.891570 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 30 00:00:47.891585 kernel: Kernel/User page tables isolation: enabled
Oct 30 00:00:47.891599 kernel: ftrace: allocating 40021 entries in 157 pages
Oct 30 00:00:47.891614 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 00:00:47.891631 kernel: Dynamic Preempt: voluntary
Oct 30 00:00:47.891646 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 00:00:47.891660 kernel: rcu: RCU event tracing is enabled.
Oct 30 00:00:47.891678 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 30 00:00:47.891691 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 00:00:47.891705 kernel: Rude variant of Tasks RCU enabled.
Oct 30 00:00:47.891733 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 00:00:47.891745 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 00:00:47.891758 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 30 00:00:47.891771 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:00:47.891790 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:00:47.891804 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 30 00:00:47.891824 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 30 00:00:47.891838 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 00:00:47.891851 kernel: Console: colour VGA+ 80x25
Oct 30 00:00:47.891863 kernel: printk: legacy console [tty0] enabled
Oct 30 00:00:47.891877 kernel: printk: legacy console [ttyS0] enabled
Oct 30 00:00:47.891891 kernel: ACPI: Core revision 20240827
Oct 30 00:00:47.891905 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 30 00:00:47.891935 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 00:00:47.891950 kernel: x2apic enabled
Oct 30 00:00:47.891963 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 00:00:47.891978 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 30 00:00:47.891993 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Oct 30 00:00:47.892020 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Oct 30 00:00:47.892038 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 30 00:00:47.892054 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 30 00:00:47.892070 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 00:00:47.892085 kernel: Spectre V2 : Mitigation: Retpolines
Oct 30 00:00:47.892107 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 00:00:47.892124 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 30 00:00:47.892140 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 00:00:47.892156 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 00:00:47.892172 kernel: MDS: Mitigation: Clear CPU buffers
Oct 30 00:00:47.892188 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 30 00:00:47.892204 kernel: active return thunk: its_return_thunk
Oct 30 00:00:47.892220 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 30 00:00:47.892236 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 00:00:47.892257 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 00:00:47.892274 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 00:00:47.892290 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 00:00:47.892308 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 30 00:00:47.892325 kernel: Freeing SMP alternatives memory: 32K
Oct 30 00:00:47.892340 kernel: pid_max: default: 32768 minimum: 301
Oct 30 00:00:47.892353 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 00:00:47.892367 kernel: landlock: Up and running.
Oct 30 00:00:47.892386 kernel: SELinux: Initializing.
Oct 30 00:00:47.892402 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 30 00:00:47.892416 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 30 00:00:47.892430 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 30 00:00:47.892445 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 30 00:00:47.892461 kernel: signal: max sigframe size: 1776
Oct 30 00:00:47.892478 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 00:00:47.892493 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 00:00:47.892539 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 00:00:47.892561 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 30 00:00:47.892579 kernel: smp: Bringing up secondary CPUs ...
Oct 30 00:00:47.892601 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 00:00:47.892616 kernel: .... node #0, CPUs: #1
Oct 30 00:00:47.892632 kernel: smp: Brought up 1 node, 2 CPUs
Oct 30 00:00:47.892647 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Oct 30 00:00:47.892665 kernel: Memory: 1960764K/2096612K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45544K init, 1184K bss, 131284K reserved, 0K cma-reserved)
Oct 30 00:00:47.892683 kernel: devtmpfs: initialized
Oct 30 00:00:47.892699 kernel: x86/mm: Memory block size: 128MB
Oct 30 00:00:47.892720 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 00:00:47.892736 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 30 00:00:47.892752 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 00:00:47.892768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 00:00:47.892784 kernel: audit: initializing netlink subsys (disabled)
Oct 30 00:00:47.892797 kernel: audit: type=2000 audit(1761782445.104:1): state=initialized audit_enabled=0 res=1
Oct 30 00:00:47.892812 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 00:00:47.892824 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 00:00:47.892839 kernel: cpuidle: using governor menu
Oct 30 00:00:47.892861 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 00:00:47.892876 kernel: dca service started, version 1.12.1
Oct 30 00:00:47.892891 kernel: PCI: Using configuration type 1 for base access
Oct 30 00:00:47.892905 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 00:00:47.892919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 00:00:47.892934 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 00:00:47.892949 kernel: ACPI: Added _OSI(Module Device)
Oct 30 00:00:47.892962 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 00:00:47.892974 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 00:00:47.892992 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 00:00:47.893005 kernel: ACPI: Interpreter enabled
Oct 30 00:00:47.893019 kernel: ACPI: PM: (supports S0 S5)
Oct 30 00:00:47.893034 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 00:00:47.893049 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 00:00:47.893060 kernel: PCI: Using E820 reservations for host bridge windows
Oct 30 00:00:47.893069 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 30 00:00:47.893079 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 00:00:47.893350 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 00:00:47.893462 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 30 00:00:47.893576 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 30 00:00:47.893590 kernel: acpiphp: Slot [3] registered
Oct 30 00:00:47.893599 kernel: acpiphp: Slot [4] registered
Oct 30 00:00:47.893608 kernel: acpiphp: Slot [5] registered
Oct 30 00:00:47.893617 kernel: acpiphp: Slot [6] registered
Oct 30 00:00:47.893626 kernel: acpiphp: Slot [7] registered
Oct 30 00:00:47.893635 kernel: acpiphp: Slot [8] registered
Oct 30 00:00:47.893648 kernel: acpiphp: Slot [9] registered
Oct 30 00:00:47.893658 kernel: acpiphp: Slot [10] registered
Oct 30 00:00:47.893667 kernel: acpiphp: Slot [11] registered
Oct 30 00:00:47.893676 kernel: acpiphp: Slot [12] registered
Oct 30 00:00:47.893685 kernel: acpiphp: Slot [13] registered
Oct 30 00:00:47.893694 kernel: acpiphp: Slot [14] registered
Oct 30 00:00:47.893703 kernel: acpiphp: Slot [15] registered
Oct 30 00:00:47.893712 kernel: acpiphp: Slot [16] registered
Oct 30 00:00:47.893721 kernel: acpiphp: Slot [17] registered
Oct 30 00:00:47.893737 kernel: acpiphp: Slot [18] registered
Oct 30 00:00:47.893750 kernel: acpiphp: Slot [19] registered
Oct 30 00:00:47.893763 kernel: acpiphp: Slot [20] registered
Oct 30 00:00:47.893777 kernel: acpiphp: Slot [21] registered
Oct 30 00:00:47.893787 kernel: acpiphp: Slot [22] registered
Oct 30 00:00:47.893796 kernel: acpiphp: Slot [23] registered
Oct 30 00:00:47.893805 kernel: acpiphp: Slot [24] registered
Oct 30 00:00:47.893815 kernel: acpiphp: Slot [25] registered
Oct 30 00:00:47.893824 kernel: acpiphp: Slot [26] registered
Oct 30 00:00:47.893833 kernel: acpiphp: Slot [27] registered
Oct 30 00:00:47.893845 kernel: acpiphp: Slot [28] registered
Oct 30 00:00:47.893854 kernel: acpiphp: Slot [29] registered
Oct 30 00:00:47.893863 kernel: acpiphp: Slot [30] registered
Oct 30 00:00:47.893872 kernel: acpiphp: Slot [31] registered
Oct 30 00:00:47.893882 kernel: PCI host bridge to bus 0000:00
Oct 30 00:00:47.893994 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 00:00:47.894083 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 30 00:00:47.894170 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 00:00:47.894333 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 30 00:00:47.894450 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 30 00:00:47.894604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 00:00:47.894761 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 30 00:00:47.894921 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 30 00:00:47.895059 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 30 00:00:47.895182 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Oct 30 00:00:47.895283 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Oct 30 00:00:47.895392 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Oct 30 00:00:47.895487 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Oct 30 00:00:47.895609 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Oct 30 00:00:47.895772 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 30 00:00:47.895880 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Oct 30 00:00:47.895995 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 30 00:00:47.896097 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 30 00:00:47.896192 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 30 00:00:47.896297 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 30 00:00:47.896395 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 30 00:00:47.896543 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 30 00:00:47.896654 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Oct 30 00:00:47.896749 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Oct 30 00:00:47.896866 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 30 00:00:47.897048 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 00:00:47.897164 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Oct 30 00:00:47.897279 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Oct 30 00:00:47.897376 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 30 00:00:47.897530 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 00:00:47.897631 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Oct 30 00:00:47.897728 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Oct 30 00:00:47.897839 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 30 00:00:47.897947 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Oct 30 00:00:47.898044 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Oct 30 00:00:47.898138 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Oct 30 00:00:47.898240 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 30 00:00:47.898341 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 30 00:00:47.898436 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Oct 30 00:00:47.898579 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Oct 30 00:00:47.898675 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 30 00:00:47.898792 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 30 00:00:47.898889 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Oct 30 00:00:47.898991 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Oct 30 00:00:47.899088 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 30 00:00:47.899199 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 00:00:47.899330 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Oct 30 00:00:47.899459 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 30 00:00:47.899471 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 30 00:00:47.899486 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 30 00:00:47.899495 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 30 00:00:47.899504 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 30 00:00:47.899525 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 30 00:00:47.899545 kernel: iommu: Default domain type: Translated
Oct 30 00:00:47.899554 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 00:00:47.899564 kernel: PCI: Using ACPI for IRQ routing
Oct 30 00:00:47.899573 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 30 00:00:47.899582 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 30 00:00:47.899596 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Oct 30 00:00:47.899728 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 30 00:00:47.899827 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 30 00:00:47.899939 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 30 00:00:47.899951 kernel: vgaarb: loaded
Oct 30 00:00:47.899960 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 30 00:00:47.899970 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 30 00:00:47.899978 kernel: clocksource: Switched to clocksource kvm-clock
Oct 30 00:00:47.899992 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 00:00:47.900002 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 00:00:47.900017 kernel: pnp: PnP ACPI init
Oct 30 00:00:47.900030 kernel: pnp: PnP ACPI: found 4 devices
Oct 30 00:00:47.900044 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 00:00:47.900058 kernel: NET: Registered PF_INET protocol family
Oct 30 00:00:47.900069 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 00:00:47.900079 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 30 00:00:47.900088 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 00:00:47.900101 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 30 00:00:47.900110 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 30 00:00:47.900119 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 30 00:00:47.900129 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 30 00:00:47.900138 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 30 00:00:47.900147 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 00:00:47.900155 kernel: NET: Registered PF_XDP protocol family
Oct 30 00:00:47.900258 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 30 00:00:47.900387 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 30 00:00:47.900544 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 30 00:00:47.900638 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 30 00:00:47.900730 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 30 00:00:47.900872 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 30 00:00:47.901003 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 30 00:00:47.901019 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 30 00:00:47.901149 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27276 usecs
Oct 30 00:00:47.901170 kernel: PCI: CLS 0 bytes, default 64
Oct 30 00:00:47.901191 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 30 00:00:47.901208 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Oct 30 00:00:47.901223 kernel: Initialise system trusted keyrings
Oct 30 00:00:47.901239 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 30 00:00:47.901255 kernel: Key type asymmetric registered
Oct 30 00:00:47.901270 kernel: Asymmetric key parser 'x509' registered
Oct 30 00:00:47.901286 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 00:00:47.901303 kernel: io scheduler mq-deadline registered
Oct 30 00:00:47.901319 kernel: io scheduler kyber registered
Oct 30 00:00:47.901338 kernel: io scheduler bfq registered
Oct 30 00:00:47.901354 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 00:00:47.901370 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 30 00:00:47.901386 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 30 00:00:47.901416 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 30 00:00:47.901425 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 00:00:47.901434 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 00:00:47.901443 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 30 00:00:47.901453 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 30 00:00:47.901465 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 30 00:00:47.901602 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 30 00:00:47.901617 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 30 00:00:47.901702 kernel: rtc_cmos 00:03: registered as rtc0
Oct 30 00:00:47.901788 kernel: rtc_cmos 00:03: setting system clock to 2025-10-30T00:00:47 UTC (1761782447)
Oct 30 00:00:47.901897 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 30 00:00:47.901909 kernel: intel_pstate: CPU model not supported
Oct 30 00:00:47.901922 kernel: NET: Registered PF_INET6 protocol family
Oct 30 00:00:47.901931 kernel: Segment Routing with IPv6
Oct 30 00:00:47.901941 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 00:00:47.901950 kernel: NET: Registered PF_PACKET protocol family
Oct 30 00:00:47.901959 kernel: Key type dns_resolver registered
Oct 30 00:00:47.901968 kernel: IPI shorthand broadcast: enabled
Oct 30 00:00:47.901978 kernel: sched_clock: Marking stable (3508005134, 161435722)->(3699704651, -30263795)
Oct 30 00:00:47.901987 kernel: registered taskstats version 1
Oct 30 00:00:47.901997 kernel: Loading compiled-in X.509 certificates
Oct 30 00:00:47.902006 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 815fc40077fbc06b8d9e8a6016fea83aecff0a2a'
Oct 30 00:00:47.902018 kernel: Demotion targets for Node 0: null
Oct 30 00:00:47.902027 kernel: Key type .fscrypt registered
Oct 30 00:00:47.902036 kernel: Key type fscrypt-provisioning registered
Oct 30 00:00:47.902063 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 30 00:00:47.902076 kernel: ima: Allocated hash algorithm: sha1
Oct 30 00:00:47.902086 kernel: ima: No architecture policies found
Oct 30 00:00:47.902095 kernel: clk: Disabling unused clocks
Oct 30 00:00:47.902105 kernel: Warning: unable to open an initial console.
Oct 30 00:00:47.902118 kernel: Freeing unused kernel image (initmem) memory: 45544K
Oct 30 00:00:47.902128 kernel: Write protecting the kernel read-only data: 40960k
Oct 30 00:00:47.902138 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Oct 30 00:00:47.902147 kernel: Run /init as init process
Oct 30 00:00:47.902157 kernel: with arguments:
Oct 30 00:00:47.902167 kernel: /init
Oct 30 00:00:47.902177 kernel: with environment:
Oct 30 00:00:47.902186 kernel: HOME=/
Oct 30 00:00:47.902196 kernel: TERM=linux
Oct 30 00:00:47.902207 systemd[1]: Successfully made /usr/ read-only.
Oct 30 00:00:47.902223 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:00:47.902234 systemd[1]: Detected virtualization kvm.
Oct 30 00:00:47.902243 systemd[1]: Detected architecture x86-64.
Oct 30 00:00:47.902253 systemd[1]: Running in initrd.
Oct 30 00:00:47.902263 systemd[1]: No hostname configured, using default hostname.
Oct 30 00:00:47.902274 systemd[1]: Hostname set to .
Oct 30 00:00:47.902284 systemd[1]: Initializing machine ID from VM UUID.
Oct 30 00:00:47.902297 systemd[1]: Queued start job for default target initrd.target.
Oct 30 00:00:47.902307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:00:47.902317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:00:47.902327 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 00:00:47.902340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:00:47.902350 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 00:00:47.902364 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 00:00:47.902377 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 30 00:00:47.902392 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 30 00:00:47.902406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:00:47.902418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:00:47.902435 systemd[1]: Reached target paths.target - Path Units.
Oct 30 00:00:47.902449 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:00:47.902465 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:00:47.902481 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 00:00:47.902493 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:00:47.902503 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:00:47.902548 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 00:00:47.902559 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 30 00:00:47.902569 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 00:00:47.902583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 00:00:47.902593 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 00:00:47.902604 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 00:00:47.902626 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 00:00:47.902636 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 00:00:47.902646 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 00:00:47.902656 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 30 00:00:47.902666 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 00:00:47.902675 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 00:00:47.902688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 00:00:47.902698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:00:47.902708 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 00:00:47.902769 systemd-journald[193]: Collecting audit messages is disabled. Oct 30 00:00:47.902797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 00:00:47.902809 systemd-journald[193]: Journal started Oct 30 00:00:47.902833 systemd-journald[193]: Runtime Journal (/run/log/journal/795ebf6c13d94577b3369c078c6ace9f) is 4.9M, max 39.2M, 34.3M free. 
Oct 30 00:00:47.910551 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 00:00:47.912472 systemd-modules-load[195]: Inserted module 'overlay' Oct 30 00:00:47.913269 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 00:00:47.918745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 00:00:47.922660 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 00:00:47.953558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 30 00:00:47.952211 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 30 00:00:48.008327 kernel: Bridge firewalling registered Oct 30 00:00:47.954638 systemd-modules-load[195]: Inserted module 'br_netfilter' Oct 30 00:00:47.956064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 00:00:48.009203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:00:48.009997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 00:00:48.010885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:00:48.014335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 00:00:48.015774 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 00:00:48.018770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 00:00:48.035295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 00:00:48.043306 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 30 00:00:48.046420 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 00:00:48.050969 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 00:00:48.053654 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 30 00:00:48.078699 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=e5fe4ef982f4bbc75df9f63e805c4ec086c6d95878919f55fe8c638c4d2b3b13 Oct 30 00:00:48.100764 systemd-resolved[229]: Positive Trust Anchors: Oct 30 00:00:48.101473 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:00:48.101626 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:00:48.107482 systemd-resolved[229]: Defaulting to hostname 'linux'. Oct 30 00:00:48.108772 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:00:48.109364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:00:48.198545 kernel: SCSI subsystem initialized Oct 30 00:00:48.210601 kernel: Loading iSCSI transport class v2.0-870. 
Oct 30 00:00:48.222627 kernel: iscsi: registered transport (tcp) Oct 30 00:00:48.246855 kernel: iscsi: registered transport (qla4xxx) Oct 30 00:00:48.246963 kernel: QLogic iSCSI HBA Driver Oct 30 00:00:48.272703 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 00:00:48.302705 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 00:00:48.305516 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 00:00:48.372371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 00:00:48.375629 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 30 00:00:48.448618 kernel: raid6: avx2x4 gen() 12596 MB/s Oct 30 00:00:48.464632 kernel: raid6: avx2x2 gen() 12836 MB/s Oct 30 00:00:48.483175 kernel: raid6: avx2x1 gen() 9833 MB/s Oct 30 00:00:48.483282 kernel: raid6: using algorithm avx2x2 gen() 12836 MB/s Oct 30 00:00:48.507069 kernel: raid6: .... xor() 11200 MB/s, rmw enabled Oct 30 00:00:48.507165 kernel: raid6: using avx2x2 recovery algorithm Oct 30 00:00:48.538563 kernel: xor: automatically using best checksumming function avx Oct 30 00:00:48.752547 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 00:00:48.763247 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 00:00:48.766744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:00:48.799376 systemd-udevd[441]: Using default interface naming scheme 'v255'. Oct 30 00:00:48.807386 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:00:48.812686 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 00:00:48.846894 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Oct 30 00:00:48.887580 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 30 00:00:48.890550 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 00:00:48.984828 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 00:00:48.988609 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 00:00:49.075537 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Oct 30 00:00:49.093125 kernel: scsi host0: Virtio SCSI HBA Oct 30 00:00:49.107591 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Oct 30 00:00:49.129686 kernel: libata version 3.00 loaded. Oct 30 00:00:49.138929 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 30 00:00:49.141698 kernel: cryptd: max_cpu_qlen set to 1000 Oct 30 00:00:49.141727 kernel: scsi host1: ata_piix Oct 30 00:00:49.144730 kernel: scsi host2: ata_piix Oct 30 00:00:49.145026 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 30 00:00:49.145148 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Oct 30 00:00:49.149283 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Oct 30 00:00:49.158548 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 30 00:00:49.175392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 00:00:49.188942 kernel: ACPI: bus type USB registered Oct 30 00:00:49.188994 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 30 00:00:49.189015 kernel: GPT:9289727 != 125829119 Oct 30 00:00:49.189034 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 30 00:00:49.189051 kernel: GPT:9289727 != 125829119 Oct 30 00:00:49.189069 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 30 00:00:49.189086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 30 00:00:49.189112 kernel: usbcore: registered new interface driver usbfs Oct 30 00:00:49.189131 kernel: usbcore: registered new interface driver hub Oct 30 00:00:49.189155 kernel: usbcore: registered new device driver usb Oct 30 00:00:49.189172 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 30 00:00:49.175570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:00:49.190102 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:00:49.192253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:00:49.193223 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 30 00:00:49.199659 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Oct 30 00:00:49.302669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:00:49.327564 kernel: AES CTR mode by8 optimization enabled Oct 30 00:00:49.410371 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 30 00:00:49.423714 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Oct 30 00:00:49.424123 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Oct 30 00:00:49.426046 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Oct 30 00:00:49.426694 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Oct 30 00:00:49.428625 kernel: hub 1-0:1.0: USB hub found Oct 30 00:00:49.429667 kernel: hub 1-0:1.0: 2 ports detected Oct 30 00:00:49.432920 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 30 00:00:49.434743 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Oct 30 00:00:49.447977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 30 00:00:49.459090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 00:00:49.460879 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 00:00:49.463142 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 00:00:49.464600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 00:00:49.465905 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 00:00:49.468300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 00:00:49.470651 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 00:00:49.495875 disk-uuid[597]: Primary Header is updated. Oct 30 00:00:49.495875 disk-uuid[597]: Secondary Entries is updated. Oct 30 00:00:49.495875 disk-uuid[597]: Secondary Header is updated. Oct 30 00:00:49.508098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 00:00:49.519665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 30 00:00:50.528590 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 30 00:00:50.529811 disk-uuid[599]: The operation has completed successfully. Oct 30 00:00:50.584190 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 00:00:50.584314 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 30 00:00:50.621213 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 30 00:00:50.647884 sh[616]: Success Oct 30 00:00:50.668618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 30 00:00:50.668725 kernel: device-mapper: uevent: version 1.0.3 Oct 30 00:00:50.669671 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 30 00:00:50.680524 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Oct 30 00:00:50.731961 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 30 00:00:50.735629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 30 00:00:50.747923 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 30 00:00:50.759550 kernel: BTRFS: device fsid ad8523d8-35e6-44b9-a685-e8d871101da4 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (628) Oct 30 00:00:50.763041 kernel: BTRFS info (device dm-0): first mount of filesystem ad8523d8-35e6-44b9-a685-e8d871101da4 Oct 30 00:00:50.763126 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:00:50.770880 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 00:00:50.770970 kernel: BTRFS info (device dm-0): enabling free space tree Oct 30 00:00:50.773236 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 30 00:00:50.774248 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 30 00:00:50.775602 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 00:00:50.776653 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 00:00:50.779751 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 30 00:00:50.817834 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (662) Oct 30 00:00:50.817910 kernel: BTRFS info (device vda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:00:50.822557 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:00:50.828812 kernel: BTRFS info (device vda6): turning on async discard Oct 30 00:00:50.828889 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 00:00:50.835550 kernel: BTRFS info (device vda6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:00:50.837251 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 00:00:50.841744 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 00:00:50.959200 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 00:00:50.963038 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:00:51.025575 systemd-networkd[801]: lo: Link UP Oct 30 00:00:51.025587 systemd-networkd[801]: lo: Gained carrier Oct 30 00:00:51.028589 systemd-networkd[801]: Enumeration completed Oct 30 00:00:51.028730 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:00:51.030093 systemd[1]: Reached target network.target - Network. Oct 30 00:00:51.030192 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 30 00:00:51.030198 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Oct 30 00:00:51.032561 systemd-networkd[801]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:00:51.032568 systemd-networkd[801]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 30 00:00:51.034696 systemd-networkd[801]: eth0: Link UP Oct 30 00:00:51.034941 systemd-networkd[801]: eth1: Link UP Oct 30 00:00:51.035115 systemd-networkd[801]: eth0: Gained carrier Oct 30 00:00:51.035131 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 30 00:00:51.049171 systemd-networkd[801]: eth1: Gained carrier Oct 30 00:00:51.049196 systemd-networkd[801]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 30 00:00:51.065665 systemd-networkd[801]: eth0: DHCPv4 address 143.198.78.203/20, gateway 143.198.64.1 acquired from 169.254.169.253 Oct 30 00:00:51.073922 ignition[710]: Ignition 2.22.0 Oct 30 00:00:51.074743 ignition[710]: Stage: fetch-offline Oct 30 00:00:51.074792 ignition[710]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:00:51.074801 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 30 00:00:51.074895 ignition[710]: parsed url from cmdline: "" Oct 30 00:00:51.074899 ignition[710]: no config URL provided Oct 30 00:00:51.074904 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 00:00:51.079591 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 00:00:51.074912 ignition[710]: no config at "/usr/lib/ignition/user.ign" Oct 30 00:00:51.074917 ignition[710]: failed to fetch config: resource requires networking Oct 30 00:00:51.082635 systemd-networkd[801]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Oct 30 00:00:51.075102 ignition[710]: Ignition finished successfully Oct 30 00:00:51.085661 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 30 00:00:51.125184 ignition[811]: Ignition 2.22.0 Oct 30 00:00:51.125203 ignition[811]: Stage: fetch Oct 30 00:00:51.125409 ignition[811]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:00:51.125425 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 30 00:00:51.125609 ignition[811]: parsed url from cmdline: "" Oct 30 00:00:51.125614 ignition[811]: no config URL provided Oct 30 00:00:51.125623 ignition[811]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 00:00:51.125637 ignition[811]: no config at "/usr/lib/ignition/user.ign" Oct 30 00:00:51.125670 ignition[811]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Oct 30 00:00:51.144050 ignition[811]: GET result: OK Oct 30 00:00:51.144998 ignition[811]: parsing config with SHA512: d50c28df48df60801fa0b2e7e2181570ee670cf45124f6a9b1b7c0b2986dd23c36d311c3bf189dd959058c0e80e2b3e04e3cf1bc00e55992435f391db0d19d34 Oct 30 00:00:51.152091 unknown[811]: fetched base config from "system" Oct 30 00:00:51.152107 unknown[811]: fetched base config from "system" Oct 30 00:00:51.153473 ignition[811]: fetch: fetch complete Oct 30 00:00:51.152116 unknown[811]: fetched user config from "digitalocean" Oct 30 00:00:51.153483 ignition[811]: fetch: fetch passed Oct 30 00:00:51.153589 ignition[811]: Ignition finished successfully Oct 30 00:00:51.158268 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 30 00:00:51.161300 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 30 00:00:51.213756 ignition[817]: Ignition 2.22.0 Oct 30 00:00:51.213780 ignition[817]: Stage: kargs Oct 30 00:00:51.214081 ignition[817]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:00:51.214100 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 30 00:00:51.215795 ignition[817]: kargs: kargs passed Oct 30 00:00:51.215896 ignition[817]: Ignition finished successfully Oct 30 00:00:51.219604 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 00:00:51.222810 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 00:00:51.264612 ignition[823]: Ignition 2.22.0 Oct 30 00:00:51.264629 ignition[823]: Stage: disks Oct 30 00:00:51.264889 ignition[823]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:00:51.264907 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 30 00:00:51.268367 ignition[823]: disks: disks passed Oct 30 00:00:51.268465 ignition[823]: Ignition finished successfully Oct 30 00:00:51.271361 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 00:00:51.273005 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 00:00:51.274027 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 00:00:51.275323 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 00:00:51.276663 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 00:00:51.277686 systemd[1]: Reached target basic.target - Basic System. Oct 30 00:00:51.280178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 00:00:51.317230 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 30 00:00:51.321552 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 00:00:51.325533 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 30 00:00:51.489561 kernel: EXT4-fs (vda9): mounted filesystem 02607114-2ead-44bc-a76e-2d51f82b108e r/w with ordered data mode. Quota mode: none. Oct 30 00:00:51.490382 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 00:00:51.492606 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 00:00:51.496642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:00:51.499525 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 00:00:51.510086 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Oct 30 00:00:51.515037 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 30 00:00:51.516893 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 30 00:00:51.517889 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 00:00:51.523573 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 00:00:51.530073 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 30 00:00:51.546037 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Oct 30 00:00:51.546071 kernel: BTRFS info (device vda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:00:51.546085 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:00:51.546098 kernel: BTRFS info (device vda6): turning on async discard Oct 30 00:00:51.546111 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 00:00:51.548296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 00:00:51.612617 coreos-metadata[841]: Oct 30 00:00:51.611 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 30 00:00:51.627530 coreos-metadata[841]: Oct 30 00:00:51.626 INFO Fetch successful Oct 30 00:00:51.639640 initrd-setup-root[869]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 00:00:51.640096 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Oct 30 00:00:51.643478 coreos-metadata[842]: Oct 30 00:00:51.640 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 30 00:00:51.640259 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Oct 30 00:00:51.649005 initrd-setup-root[877]: cut: /sysroot/etc/group: No such file or directory Oct 30 00:00:51.655084 coreos-metadata[842]: Oct 30 00:00:51.655 INFO Fetch successful Oct 30 00:00:51.658792 initrd-setup-root[884]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 00:00:51.663235 coreos-metadata[842]: Oct 30 00:00:51.663 INFO wrote hostname ci-4459.1.0-n-705ef66fdc to /sysroot/etc/hostname Oct 30 00:00:51.666279 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 30 00:00:51.670830 initrd-setup-root[892]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 00:00:51.802900 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 00:00:51.805777 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 00:00:51.808784 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 00:00:51.828919 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 00:00:51.832641 kernel: BTRFS info (device vda6): last unmount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:00:51.851042 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 30 00:00:51.877571 ignition[960]: INFO : Ignition 2.22.0 Oct 30 00:00:51.877571 ignition[960]: INFO : Stage: mount Oct 30 00:00:51.879868 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:00:51.879868 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 30 00:00:51.879868 ignition[960]: INFO : mount: mount passed Oct 30 00:00:51.879868 ignition[960]: INFO : Ignition finished successfully Oct 30 00:00:51.881699 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 00:00:51.884376 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 00:00:51.913113 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:00:51.955536 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (973) Oct 30 00:00:51.959903 kernel: BTRFS info (device vda6): first mount of filesystem 20cadb25-62ee-49b8-9ff8-7ba27828b77e Oct 30 00:00:51.959983 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:00:51.965314 kernel: BTRFS info (device vda6): turning on async discard Oct 30 00:00:51.965406 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 00:00:51.969266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 00:00:52.012538 ignition[989]: INFO : Ignition 2.22.0 Oct 30 00:00:52.012538 ignition[989]: INFO : Stage: files Oct 30 00:00:52.014302 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:00:52.014302 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 30 00:00:52.014302 ignition[989]: DEBUG : files: compiled without relabeling support, skipping Oct 30 00:00:52.017103 ignition[989]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 30 00:00:52.017103 ignition[989]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 30 00:00:52.021384 ignition[989]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 30 00:00:52.022575 ignition[989]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 30 00:00:52.024086 unknown[989]: wrote ssh authorized keys file for user: core Oct 30 00:00:52.024974 ignition[989]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 30 00:00:52.028558 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 30 00:00:52.028558 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 30 00:00:52.140986 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 30 00:00:52.281691 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 30 00:00:52.282625 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 30 00:00:52.283366 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 30 
00:00:52.283366 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 30 00:00:52.284836 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 30 00:00:52.284836 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 00:00:52.284836 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 30 00:00:52.284836 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 00:00:52.284836 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 30 00:00:52.294016 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 00:00:52.294016 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 00:00:52.294016 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:00:52.294016 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:00:52.294016 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:00:52.294016 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 30 00:00:52.589681 systemd-networkd[801]: eth1: Gained IPv6LL Oct 30 00:00:52.672597 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 30 00:00:52.975024 systemd-networkd[801]: eth0: Gained IPv6LL Oct 30 00:00:53.032266 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 30 00:00:53.032266 ignition[989]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 30 00:00:53.034995 ignition[989]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 00:00:53.034995 ignition[989]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 00:00:53.034995 ignition[989]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 30 00:00:53.034995 ignition[989]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 30 00:00:53.034995 ignition[989]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 00:00:53.034995 ignition[989]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 00:00:53.041271 ignition[989]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 30 00:00:53.041271 ignition[989]: INFO : files: files passed Oct 30 00:00:53.041271 ignition[989]: INFO : Ignition finished successfully Oct 30 00:00:53.038191 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 30 00:00:53.041795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Oct 30 00:00:53.046437 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 00:00:53.066154 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 00:00:53.066320 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 00:00:53.076361 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:00:53.076361 initrd-setup-root-after-ignition[1020]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:00:53.079267 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:00:53.079932 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:00:53.081562 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 00:00:53.083477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 00:00:53.143729 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 00:00:53.143919 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 00:00:53.145300 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 00:00:53.146058 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 00:00:53.147127 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 00:00:53.148587 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 00:00:53.173785 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:00:53.177124 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 00:00:53.207093 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:00:53.208587 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:00:53.209780 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 00:00:53.210735 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 00:00:53.210989 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:00:53.212970 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 00:00:53.213769 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 00:00:53.214708 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 00:00:53.215759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:00:53.216742 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 00:00:53.217946 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:00:53.218993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 00:00:53.220141 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:00:53.221231 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 00:00:53.222394 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 00:00:53.223363 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 00:00:53.224364 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 00:00:53.224651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:00:53.226300 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:00:53.227071 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:00:53.228289 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 00:00:53.228446 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:00:53.229553 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 00:00:53.229816 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:00:53.231549 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 00:00:53.231822 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:00:53.232977 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 00:00:53.233227 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 00:00:53.234648 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 30 00:00:53.234887 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 30 00:00:53.238651 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 00:00:53.243000 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 00:00:53.243793 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 00:00:53.244141 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:00:53.246812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 00:00:53.247016 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:00:53.259429 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 00:00:53.260629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 00:00:53.289483 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 00:00:53.293443 ignition[1044]: INFO : Ignition 2.22.0
Oct 30 00:00:53.293443 ignition[1044]: INFO : Stage: umount
Oct 30 00:00:53.294817 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:00:53.294817 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 30 00:00:53.297129 ignition[1044]: INFO : umount: umount passed
Oct 30 00:00:53.297129 ignition[1044]: INFO : Ignition finished successfully
Oct 30 00:00:53.299762 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 00:00:53.299948 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 00:00:53.301486 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 00:00:53.301595 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 00:00:53.303171 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 00:00:53.303290 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 00:00:53.304992 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 00:00:53.305086 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 00:00:53.306170 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 30 00:00:53.306246 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 30 00:00:53.307351 systemd[1]: Stopped target network.target - Network.
Oct 30 00:00:53.308396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 00:00:53.308479 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:00:53.309572 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 00:00:53.310576 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 00:00:53.314878 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:00:53.315733 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 00:00:53.316873 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 00:00:53.317900 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 00:00:53.317957 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:00:53.318906 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 00:00:53.318954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:00:53.319982 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 00:00:53.320065 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 00:00:53.320991 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 00:00:53.321057 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 00:00:53.321950 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 00:00:53.322025 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 00:00:53.323162 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 00:00:53.324256 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 00:00:53.333237 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 00:00:53.333373 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 00:00:53.337857 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 30 00:00:53.338953 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 00:00:53.340243 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 00:00:53.343226 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 30 00:00:53.345660 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 00:00:53.346319 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 00:00:53.346369 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:00:53.348840 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 00:00:53.350859 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 00:00:53.350972 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:00:53.353041 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 00:00:53.353111 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:00:53.356095 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 00:00:53.356169 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:00:53.358556 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 00:00:53.358638 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:00:53.360229 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:00:53.366223 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 30 00:00:53.366353 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:00:53.377829 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 00:00:53.378913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:00:53.380072 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 00:00:53.380135 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:00:53.380844 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 00:00:53.380894 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:00:53.382185 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 00:00:53.382263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:00:53.384256 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 00:00:53.384340 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:00:53.385430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 00:00:53.385533 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:00:53.388760 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 00:00:53.389842 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 00:00:53.389930 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:00:53.393175 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 00:00:53.393254 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:00:53.395284 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 30 00:00:53.395349 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:00:53.397706 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 00:00:53.397760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:00:53.399115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:00:53.399175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:00:53.404675 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 30 00:00:53.404782 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Oct 30 00:00:53.404835 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 30 00:00:53.404893 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:00:53.405409 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 00:00:53.407592 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 00:00:53.413351 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 00:00:53.413792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 00:00:53.415336 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 00:00:53.417050 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 00:00:53.441034 systemd[1]: Switching root.
Oct 30 00:00:53.476311 systemd-journald[193]: Journal stopped
Oct 30 00:00:54.640039 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 30 00:00:54.640116 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 00:00:54.640133 kernel: SELinux: policy capability open_perms=1
Oct 30 00:00:54.640149 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 00:00:54.640160 kernel: SELinux: policy capability always_check_network=0
Oct 30 00:00:54.640172 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 00:00:54.640184 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 00:00:54.640195 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 00:00:54.640223 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 00:00:54.640235 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 00:00:54.640246 kernel: audit: type=1403 audit(1761782453.675:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 00:00:54.640262 systemd[1]: Successfully loaded SELinux policy in 78.579ms.
Oct 30 00:00:54.640286 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.931ms.
Oct 30 00:00:54.640301 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:00:54.640321 systemd[1]: Detected virtualization kvm.
Oct 30 00:00:54.640334 systemd[1]: Detected architecture x86-64.
Oct 30 00:00:54.640347 systemd[1]: Detected first boot.
Oct 30 00:00:54.640359 systemd[1]: Hostname set to .
Oct 30 00:00:54.640371 systemd[1]: Initializing machine ID from VM UUID.
Oct 30 00:00:54.640383 zram_generator::config[1087]: No configuration found.
Oct 30 00:00:54.640404 kernel: Guest personality initialized and is inactive
Oct 30 00:00:54.640416 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 00:00:54.640427 kernel: Initialized host personality
Oct 30 00:00:54.640443 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 00:00:54.640455 systemd[1]: Populated /etc with preset unit settings.
Oct 30 00:00:54.640468 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 30 00:00:54.640481 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 00:00:54.640492 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 00:00:54.640545 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:00:54.640559 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 00:00:54.640579 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 00:00:54.640591 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 00:00:54.640604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 00:00:54.640618 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 00:00:54.640630 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 00:00:54.640643 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 00:00:54.640655 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 00:00:54.640675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:00:54.640687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:00:54.640700 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 00:00:54.640712 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 00:00:54.640726 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 00:00:54.640739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:00:54.640758 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 00:00:54.640770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:00:54.640782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:00:54.640795 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 00:00:54.640807 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 00:00:54.640819 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:00:54.640832 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 00:00:54.640844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:00:54.640862 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:00:54.640878 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:00:54.640891 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:00:54.640903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 00:00:54.640916 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 00:00:54.640933 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 00:00:54.640951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:00:54.640970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:00:54.640990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:00:54.641009 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 00:00:54.641032 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 00:00:54.641053 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 00:00:54.641073 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 00:00:54.641094 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:54.641113 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 00:00:54.641126 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 00:00:54.641139 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 00:00:54.641152 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 00:00:54.641165 systemd[1]: Reached target machines.target - Containers.
Oct 30 00:00:54.641189 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 00:00:54.641210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:00:54.641238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:00:54.641257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 00:00:54.641272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:00:54.641285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:00:54.641297 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:00:54.641309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 00:00:54.641325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:00:54.641338 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 00:00:54.641378 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 00:00:54.641391 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 00:00:54.641403 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 00:00:54.641416 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 00:00:54.641431 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:00:54.641443 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:00:54.641456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:00:54.641468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:00:54.641480 kernel: fuse: init (API version 7.41)
Oct 30 00:00:54.641492 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 00:00:54.641604 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 00:00:54.641622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:00:54.641638 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 30 00:00:54.641659 systemd[1]: Stopped verity-setup.service.
Oct 30 00:00:54.641672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:54.641685 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 00:00:54.641698 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 00:00:54.641713 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 00:00:54.641725 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 00:00:54.641738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 00:00:54.641750 kernel: loop: module loaded
Oct 30 00:00:54.641762 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 00:00:54.642596 systemd-journald[1161]: Collecting audit messages is disabled.
Oct 30 00:00:54.642646 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:00:54.642666 systemd-journald[1161]: Journal started
Oct 30 00:00:54.642691 systemd-journald[1161]: Runtime Journal (/run/log/journal/795ebf6c13d94577b3369c078c6ace9f) is 4.9M, max 39.2M, 34.3M free.
Oct 30 00:00:54.332213 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 00:00:54.357459 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 30 00:00:54.358085 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 00:00:54.644608 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:00:54.646597 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 00:00:54.647697 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 00:00:54.649063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:00:54.650694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:00:54.651665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:00:54.651854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:00:54.652842 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 00:00:54.653443 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 00:00:54.656281 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:00:54.656447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:00:54.657253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:00:54.657993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 00:00:54.685760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 00:00:54.690640 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 00:00:54.691222 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 00:00:54.691264 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:00:54.693995 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 00:00:54.700031 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 00:00:54.702737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:00:54.712715 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 00:00:54.721352 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 00:00:54.723725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:00:54.727784 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 00:00:54.728351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:00:54.730911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:00:54.734801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 00:00:54.742571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 00:00:54.745578 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 00:00:54.746485 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:00:54.747318 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 00:00:54.748022 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 30 00:00:54.748642 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 30 00:00:54.753090 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:00:54.773078 systemd-journald[1161]: Time spent on flushing to /var/log/journal/795ebf6c13d94577b3369c078c6ace9f is 31.610ms for 1010 entries.
Oct 30 00:00:54.773078 systemd-journald[1161]: System Journal (/var/log/journal/795ebf6c13d94577b3369c078c6ace9f) is 8M, max 195.6M, 187.6M free.
Oct 30 00:00:54.828754 systemd-journald[1161]: Received client request to flush runtime journal.
Oct 30 00:00:54.828812 kernel: ACPI: bus type drm_connector registered
Oct 30 00:00:54.779468 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:00:54.779726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:00:54.833131 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Oct 30 00:00:54.833145 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Oct 30 00:00:54.833342 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 00:00:54.839157 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:00:54.843362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:00:54.845532 kernel: loop0: detected capacity change from 0 to 229808
Oct 30 00:00:54.853196 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 00:00:54.861967 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 00:00:54.863235 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 30 00:00:54.868638 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 00:00:54.907546 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 00:00:54.931118 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 00:00:54.940564 kernel: loop1: detected capacity change from 0 to 128016
Oct 30 00:00:54.953279 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 00:00:54.959106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:00:54.969823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:00:54.994038 kernel: loop2: detected capacity change from 0 to 110984
Oct 30 00:00:55.004569 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Oct 30 00:00:55.004590 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Oct 30 00:00:55.013437 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:00:55.034587 kernel: loop3: detected capacity change from 0 to 8
Oct 30 00:00:55.058763 kernel: loop4: detected capacity change from 0 to 229808
Oct 30 00:00:55.079555 kernel: loop5: detected capacity change from 0 to 128016
Oct 30 00:00:55.100528 kernel: loop6: detected capacity change from 0 to 110984
Oct 30 00:00:55.127539 kernel: loop7: detected capacity change from 0 to 8
Oct 30 00:00:55.131119 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 30 00:00:55.132396 (sd-merge)[1239]: Merged extensions into '/usr'.
Oct 30 00:00:55.142641 systemd[1]: Reload requested from client PID 1209 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 30 00:00:55.142668 systemd[1]: Reloading...
Oct 30 00:00:55.328633 zram_generator::config[1265]: No configuration found.
Oct 30 00:00:55.501497 ldconfig[1204]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 00:00:55.637044 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 30 00:00:55.637254 systemd[1]: Reloading finished in 494 ms.
Oct 30 00:00:55.651238 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 00:00:55.656444 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 30 00:00:55.666406 systemd[1]: Starting ensure-sysext.service...
Oct 30 00:00:55.669764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:00:55.697963 systemd[1]: Reload requested from client PID 1309 ('systemctl') (unit ensure-sysext.service)...
Oct 30 00:00:55.697982 systemd[1]: Reloading...
Oct 30 00:00:55.723969 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 00:00:55.724000 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 00:00:55.724270 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 00:00:55.724557 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 30 00:00:55.730136 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 30 00:00:55.731360 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Oct 30 00:00:55.731455 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Oct 30 00:00:55.739723 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 00:00:55.739736 systemd-tmpfiles[1310]: Skipping /boot
Oct 30 00:00:55.757217 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 00:00:55.757230 systemd-tmpfiles[1310]: Skipping /boot
Oct 30 00:00:55.815817 zram_generator::config[1337]: No configuration found.
Oct 30 00:00:56.040467 systemd[1]: Reloading finished in 342 ms.
Oct 30 00:00:56.059440 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 30 00:00:56.071768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:00:56.078650 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 00:00:56.082782 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 30 00:00:56.089109 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 30 00:00:56.093728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 00:00:56.099822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:00:56.103694 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 30 00:00:56.111255 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.112845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:00:56.114833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:00:56.118843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:00:56.125585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:00:56.126183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:00:56.126295 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:00:56.126401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.131843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.132846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:00:56.133056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:00:56.133142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:00:56.133229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.144378 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 30 00:00:56.150932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.151197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:00:56.159956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:00:56.161756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:00:56.161894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:00:56.162028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.166574 systemd[1]: Finished ensure-sysext.service.
Oct 30 00:00:56.177024 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 30 00:00:56.190958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 30 00:00:56.195953 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 30 00:00:56.197911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 00:00:56.205943 systemd-udevd[1386]: Using default interface naming scheme 'v255'.
Oct 30 00:00:56.225299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:00:56.225630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:00:56.240031 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:00:56.240309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:00:56.249622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:00:56.249841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:00:56.250760 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:00:56.251635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:00:56.252372 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:00:56.262706 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 00:00:56.264708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:00:56.264783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:00:56.281584 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 30 00:00:56.287923 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 30 00:00:56.330455 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 30 00:00:56.345874 augenrules[1447]: No rules
Oct 30 00:00:56.348306 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 00:00:56.348522 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 00:00:56.353837 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 30 00:00:56.458460 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Oct 30 00:00:56.462783 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 30 00:00:56.463847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.464773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:00:56.468806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:00:56.478852 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:00:56.483847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:00:56.485748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:00:56.485807 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:00:56.485852 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 00:00:56.485875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:00:56.491420 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 30 00:00:56.530533 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 30 00:00:56.534763 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:00:56.536224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:00:56.547350 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 30 00:00:56.549370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:00:56.550739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:00:56.552001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:00:56.553037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:00:56.565653 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:00:56.567293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:00:56.568534 kernel: mousedev: PS/2 mouse device common for all mice
Oct 30 00:00:56.619617 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 30 00:00:56.635803 kernel: ACPI: button: Power Button [PWRF]
Oct 30 00:00:56.682584 systemd-networkd[1423]: lo: Link UP
Oct 30 00:00:56.682930 systemd-networkd[1423]: lo: Gained carrier
Oct 30 00:00:56.685626 systemd-networkd[1423]: Enumeration completed
Oct 30 00:00:56.685863 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 00:00:56.688607 systemd-networkd[1423]: eth0: Configuring with /run/systemd/network/10-ca:4c:86:5f:1b:90.network.
Oct 30 00:00:56.689471 systemd-networkd[1423]: eth1: Configuring with /run/systemd/network/10-2e:32:cc:8c:92:b1.network.
Oct 30 00:00:56.690071 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 30 00:00:56.692393 systemd-networkd[1423]: eth0: Link UP
Oct 30 00:00:56.692687 systemd-networkd[1423]: eth0: Gained carrier
Oct 30 00:00:56.695821 systemd-networkd[1423]: eth1: Link UP
Oct 30 00:00:56.696640 systemd-networkd[1423]: eth1: Gained carrier
Oct 30 00:00:56.696986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 30 00:00:56.729926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 00:00:56.731745 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 30 00:00:56.761129 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 30 00:00:56.763814 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 30 00:00:56.780552 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 30 00:00:56.787378 systemd-resolved[1385]: Positive Trust Anchors:
Oct 30 00:00:56.787395 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 00:00:56.787434 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 00:00:56.788202 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 30 00:00:56.788930 systemd[1]: Reached target time-set.target - System Time Set.
Oct 30 00:00:56.796832 systemd-resolved[1385]: Using system hostname 'ci-4459.1.0-n-705ef66fdc'.
Oct 30 00:00:56.801063 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 00:00:56.801730 systemd[1]: Reached target network.target - Network.
Oct 30 00:00:56.802334 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:00:56.803613 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 00:00:56.804177 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 30 00:00:56.804679 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 30 00:00:56.805349 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 30 00:00:56.806772 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 30 00:00:56.807313 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 30 00:00:56.807805 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 30 00:00:56.808531 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 30 00:00:56.808565 systemd[1]: Reached target paths.target - Path Units.
Oct 30 00:00:56.809599 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 00:00:56.811014 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 30 00:00:56.815578 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 30 00:00:56.814228 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 30 00:00:56.823538 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 30 00:00:56.820107 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 30 00:00:56.820849 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 30 00:00:56.821402 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 30 00:00:56.825535 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 30 00:00:56.830636 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 30 00:00:56.836581 kernel: Console: switching to colour dummy device 80x25
Oct 30 00:00:56.836682 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 30 00:00:56.836700 kernel: [drm] features: -context_init
Oct 30 00:00:56.834893 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 30 00:00:56.835816 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 30 00:00:56.837754 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 00:00:56.837842 systemd[1]: Reached target basic.target - Basic System.
Oct 30 00:00:56.837951 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 30 00:00:56.837974 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 30 00:00:56.840555 kernel: [drm] number of scanouts: 1
Oct 30 00:00:56.840625 kernel: [drm] number of cap sets: 0
Oct 30 00:00:56.840786 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 30 00:00:56.843757 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 30 00:00:56.846797 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 30 00:00:56.848826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 30 00:00:58.015951 systemd-timesyncd[1400]: Contacted time server 12.205.28.193:123 (0.flatcar.pool.ntp.org).
Oct 30 00:00:58.016010 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-10-30 00:00:58.015827 UTC.
Oct 30 00:00:58.017234 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 30 00:00:58.018134 systemd-resolved[1385]: Clock change detected. Flushing caches.
Oct 30 00:00:58.019848 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 30 00:00:58.019964 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 30 00:00:58.023854 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 30 00:00:58.034133 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 30 00:00:58.032550 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 30 00:00:58.036278 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 30 00:00:58.045414 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 30 00:00:58.048591 oslogin_cache_refresh[1511]: Refreshing passwd entry cache
Oct 30 00:00:58.049603 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Refreshing passwd entry cache
Oct 30 00:00:58.049736 jq[1509]: false
Oct 30 00:00:58.051354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 30 00:00:58.053561 oslogin_cache_refresh[1511]: Failure getting users, quitting
Oct 30 00:00:58.060028 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Failure getting users, quitting
Oct 30 00:00:58.060028 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 00:00:58.060028 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Refreshing group entry cache
Oct 30 00:00:58.060028 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Failure getting groups, quitting
Oct 30 00:00:58.060028 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 00:00:58.053584 oslogin_cache_refresh[1511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 00:00:58.053650 oslogin_cache_refresh[1511]: Refreshing group entry cache
Oct 30 00:00:58.054361 oslogin_cache_refresh[1511]: Failure getting groups, quitting
Oct 30 00:00:58.054371 oslogin_cache_refresh[1511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 00:00:58.070184 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 30 00:00:58.065450 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 30 00:00:58.077665 kernel: Console: switching to colour frame buffer device 128x48
Oct 30 00:00:58.083349 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 30 00:00:58.088491 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 30 00:00:58.096024 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 30 00:00:58.102337 systemd[1]: Starting update-engine.service - Update Engine...
Oct 30 00:00:58.113270 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 30 00:00:58.121860 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 30 00:00:58.126568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 30 00:00:58.134448 jq[1526]: true
Oct 30 00:00:58.130373 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 30 00:00:58.130928 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 30 00:00:58.131303 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 30 00:00:58.131823 systemd[1]: motdgen.service: Deactivated successfully.
Oct 30 00:00:58.132072 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 30 00:00:58.156574 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 30 00:00:58.164156 coreos-metadata[1506]: Oct 30 00:00:58.156 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 30 00:00:58.158432 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 30 00:00:58.164781 jq[1529]: true
Oct 30 00:00:58.182277 coreos-metadata[1506]: Oct 30 00:00:58.170 INFO Fetch successful
Oct 30 00:00:58.206556 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 30 00:00:58.206267 dbus-daemon[1507]: [system] SELinux support is enabled
Oct 30 00:00:58.213691 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 30 00:00:58.213738 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 30 00:00:58.216683 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 30 00:00:58.216820 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 30 00:00:58.216852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 30 00:00:58.224209 update_engine[1525]: I20251030 00:00:58.220335 1525 main.cc:92] Flatcar Update Engine starting
Oct 30 00:00:58.224209 update_engine[1525]: I20251030 00:00:58.222811 1525 update_check_scheduler.cc:74] Next update check in 7m41s
Oct 30 00:00:58.223177 systemd[1]: Started update-engine.service - Update Engine.
Oct 30 00:00:58.224735 tar[1528]: linux-amd64/LICENSE
Oct 30 00:00:58.224735 tar[1528]: linux-amd64/helm
Oct 30 00:00:58.255666 extend-filesystems[1510]: Found /dev/vda6
Oct 30 00:00:58.261211 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 30 00:00:58.279109 extend-filesystems[1510]: Found /dev/vda9
Oct 30 00:00:58.300630 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 30 00:00:58.330629 extend-filesystems[1510]: Checking size of /dev/vda9
Oct 30 00:00:58.350134 bash[1568]: Updated "/home/core/.ssh/authorized_keys"
Oct 30 00:00:58.353060 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 30 00:00:58.359165 systemd[1]: Starting sshkeys.service...
Oct 30 00:00:58.367204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:00:58.421649 extend-filesystems[1510]: Resized partition /dev/vda9
Oct 30 00:00:58.427000 extend-filesystems[1578]: resize2fs 1.47.3 (8-Jul-2025)
Oct 30 00:00:58.433787 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 30 00:00:58.437351 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 30 00:00:58.467926 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 30 00:00:58.476625 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 30 00:00:58.498648 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 30 00:00:58.722114 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 30 00:00:58.750172 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 30 00:00:58.750172 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 30 00:00:58.750172 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 30 00:00:58.770061 extend-filesystems[1510]: Resized filesystem in /dev/vda9
Oct 30 00:00:58.751519 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 30 00:00:58.753414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 30 00:00:58.784440 coreos-metadata[1580]: Oct 30 00:00:58.784 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 30 00:00:58.795911 containerd[1550]: time="2025-10-30T00:00:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 30 00:00:58.806419 containerd[1550]: time="2025-10-30T00:00:58.800817225Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 30 00:00:58.808099 coreos-metadata[1580]: Oct 30 00:00:58.807 INFO Fetch successful
Oct 30 00:00:58.816338 kernel: EDAC MC: Ver: 3.0.0
Oct 30 00:00:58.823461 unknown[1580]: wrote ssh authorized keys file for user: core
Oct 30 00:00:58.849785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:00:58.880697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:00:58.880926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:00:58.884106 containerd[1550]: time="2025-10-30T00:00:58.884020868Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.659µs"
Oct 30 00:00:58.886185 containerd[1550]: time="2025-10-30T00:00:58.886134979Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 30 00:00:58.886274 containerd[1550]: time="2025-10-30T00:00:58.886190419Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 30 00:00:58.886344 containerd[1550]: time="2025-10-30T00:00:58.886329716Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 30 00:00:58.886368 containerd[1550]: time="2025-10-30T00:00:58.886347281Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 30 00:00:58.886391 containerd[1550]: time="2025-10-30T00:00:58.886373387Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 00:00:58.886478 containerd[1550]: time="2025-10-30T00:00:58.886457417Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 00:00:58.886512 containerd[1550]: time="2025-10-30T00:00:58.886481901Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 00:00:58.894020 update-ssh-keys[1598]: Updated "/home/core/.ssh/authorized_keys"
Oct 30 00:00:58.902918 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:00:58.905362 containerd[1550]: time="2025-10-30T00:00:58.905312403Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905362 containerd[1550]: time="2025-10-30T00:00:58.905354318Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905513 containerd[1550]: time="2025-10-30T00:00:58.905376985Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905513 containerd[1550]: time="2025-10-30T00:00:58.905386104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905513 containerd[1550]: time="2025-10-30T00:00:58.905497960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905735 containerd[1550]: time="2025-10-30T00:00:58.905709004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905784 containerd[1550]: time="2025-10-30T00:00:58.905745457Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 00:00:58.905784 containerd[1550]: time="2025-10-30T00:00:58.905755146Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 30 00:00:58.905843 containerd[1550]: time="2025-10-30T00:00:58.905797348Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 30 00:00:58.908281 containerd[1550]: time="2025-10-30T00:00:58.908223645Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 30 00:00:58.908394 containerd[1550]: time="2025-10-30T00:00:58.908353357Z" level=info msg="metadata content store policy set" policy=shared
Oct 30 00:00:58.909105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:00:58.916195 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 00:00:58.918137 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922397690Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922485890Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922502306Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922516246Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922562699Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922573812Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922588634Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922602720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922614603Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922624445Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922633703Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922654495Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922814046Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 30 00:00:58.923780 containerd[1550]: time="2025-10-30T00:00:58.922834526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 30 00:00:58.922918 systemd[1]: Finished sshkeys.service.
Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922849759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922864681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922876949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922901833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922912967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922923779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922935687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922945792Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.922955996Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.923029879Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.923054750Z" level=info msg="Start snapshots syncer" Oct 30 00:00:58.928405 containerd[1550]: time="2025-10-30T00:00:58.924132152Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 00:00:58.938112 containerd[1550]: time="2025-10-30T00:00:58.933520542Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 00:00:58.938112 containerd[1550]: time="2025-10-30T00:00:58.933635946Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 00:00:58.935615 systemd-networkd[1423]: eth0: Gained IPv6LL Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933751947Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933896793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933925287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933941975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933956228Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933970810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933984735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.933998021Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.934032318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.934046701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 00:00:58.938601 containerd[1550]: 
time="2025-10-30T00:00:58.934059335Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 00:00:58.938601 containerd[1550]: time="2025-10-30T00:00:58.937376368Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:00:58.942757 containerd[1550]: time="2025-10-30T00:00:58.937423428Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:00:58.942757 containerd[1550]: time="2025-10-30T00:00:58.942663555Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:00:58.942757 containerd[1550]: time="2025-10-30T00:00:58.942692900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:00:58.942757 containerd[1550]: time="2025-10-30T00:00:58.942702586Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 00:00:58.942757 containerd[1550]: time="2025-10-30T00:00:58.942720658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 00:00:58.942757 containerd[1550]: time="2025-10-30T00:00:58.942737645Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 00:00:58.942939 containerd[1550]: time="2025-10-30T00:00:58.942760363Z" level=info msg="runtime interface created" Oct 30 00:00:58.942939 containerd[1550]: time="2025-10-30T00:00:58.942769179Z" level=info msg="created NRI interface" Oct 30 00:00:58.942939 containerd[1550]: time="2025-10-30T00:00:58.942781159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 00:00:58.942939 containerd[1550]: 
time="2025-10-30T00:00:58.942804806Z" level=info msg="Connect containerd service" Oct 30 00:00:58.942939 containerd[1550]: time="2025-10-30T00:00:58.942865346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 00:00:58.945805 containerd[1550]: time="2025-10-30T00:00:58.943728225Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:00:58.945923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 00:00:58.950400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 00:00:58.950673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:00:58.953321 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 00:00:58.959579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:00:58.971560 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 00:00:58.981440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:00:58.995826 systemd-logind[1516]: New seat seat0. Oct 30 00:00:58.999912 systemd-logind[1516]: Watching system buttons on /dev/input/event2 (Power Button) Oct 30 00:00:58.999939 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 00:00:59.000180 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 00:00:59.077801 locksmithd[1547]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 00:00:59.101210 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 00:00:59.164995 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 30 00:00:59.248622 containerd[1550]: time="2025-10-30T00:00:59.248504763Z" level=info msg="Start subscribing containerd event" Oct 30 00:00:59.248622 containerd[1550]: time="2025-10-30T00:00:59.248571065Z" level=info msg="Start recovering state" Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248696241Z" level=info msg="Start event monitor" Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248714223Z" level=info msg="Start cni network conf syncer for default" Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248723158Z" level=info msg="Start streaming server" Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248733929Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248743910Z" level=info msg="runtime interface starting up..." Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248749961Z" level=info msg="starting plugins..." Oct 30 00:00:59.248785 containerd[1550]: time="2025-10-30T00:00:59.248764361Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 00:00:59.251396 containerd[1550]: time="2025-10-30T00:00:59.249354062Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 00:00:59.251396 containerd[1550]: time="2025-10-30T00:00:59.249427971Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 00:00:59.251396 containerd[1550]: time="2025-10-30T00:00:59.249499882Z" level=info msg="containerd successfully booted in 0.454190s" Oct 30 00:00:59.250007 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 00:00:59.385357 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 00:00:59.461658 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 00:00:59.467468 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 30 00:00:59.511617 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 00:00:59.513668 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 00:00:59.520055 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 00:00:59.556065 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 00:00:59.563513 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 00:00:59.566197 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 00:00:59.571341 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 00:00:59.615349 tar[1528]: linux-amd64/README.md Oct 30 00:00:59.635190 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 00:00:59.703556 systemd-networkd[1423]: eth1: Gained IPv6LL Oct 30 00:01:00.404693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:01:00.411034 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 00:01:00.413779 systemd[1]: Startup finished in 3.577s (kernel) + 6.018s (initrd) + 5.652s (userspace) = 15.248s. Oct 30 00:01:00.417877 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:01:01.196454 kubelet[1666]: E1030 00:01:01.196363 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:01:01.201408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:01:01.201676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:01:01.202351 systemd[1]: kubelet.service: Consumed 1.326s CPU time, 269.4M memory peak. 
Oct 30 00:01:03.130733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 00:01:03.132629 systemd[1]: Started sshd@0-143.198.78.203:22-139.178.89.65:35226.service - OpenSSH per-connection server daemon (139.178.89.65:35226). Oct 30 00:01:03.247254 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 35226 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:03.250681 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:03.266846 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 00:01:03.269629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 00:01:03.273515 systemd-logind[1516]: New session 1 of user core. Oct 30 00:01:03.300343 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 00:01:03.306008 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 00:01:03.327771 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 00:01:03.332690 systemd-logind[1516]: New session c1 of user core. Oct 30 00:01:03.567403 systemd[1683]: Queued start job for default target default.target. Oct 30 00:01:03.578672 systemd[1683]: Created slice app.slice - User Application Slice. Oct 30 00:01:03.578717 systemd[1683]: Reached target paths.target - Paths. Oct 30 00:01:03.578786 systemd[1683]: Reached target timers.target - Timers. Oct 30 00:01:03.580663 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 00:01:03.599102 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 00:01:03.599328 systemd[1683]: Reached target sockets.target - Sockets. Oct 30 00:01:03.599430 systemd[1683]: Reached target basic.target - Basic System. Oct 30 00:01:03.599501 systemd[1683]: Reached target default.target - Main User Target. 
Oct 30 00:01:03.599555 systemd[1683]: Startup finished in 257ms. Oct 30 00:01:03.599767 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 00:01:03.611437 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 00:01:03.688616 systemd[1]: Started sshd@1-143.198.78.203:22-139.178.89.65:35230.service - OpenSSH per-connection server daemon (139.178.89.65:35230). Oct 30 00:01:03.770483 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 35230 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:03.773205 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:03.782534 systemd-logind[1516]: New session 2 of user core. Oct 30 00:01:03.792428 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 30 00:01:03.856647 sshd[1697]: Connection closed by 139.178.89.65 port 35230 Oct 30 00:01:03.858431 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:03.868826 systemd[1]: sshd@1-143.198.78.203:22-139.178.89.65:35230.service: Deactivated successfully. Oct 30 00:01:03.871231 systemd[1]: session-2.scope: Deactivated successfully. Oct 30 00:01:03.873639 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. Oct 30 00:01:03.877906 systemd[1]: Started sshd@2-143.198.78.203:22-139.178.89.65:35244.service - OpenSSH per-connection server daemon (139.178.89.65:35244). Oct 30 00:01:03.879133 systemd-logind[1516]: Removed session 2. Oct 30 00:01:03.952566 sshd[1703]: Accepted publickey for core from 139.178.89.65 port 35244 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:03.955454 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:03.962890 systemd-logind[1516]: New session 3 of user core. Oct 30 00:01:03.982463 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 30 00:01:04.042498 sshd[1706]: Connection closed by 139.178.89.65 port 35244 Oct 30 00:01:04.042351 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:04.059391 systemd[1]: sshd@2-143.198.78.203:22-139.178.89.65:35244.service: Deactivated successfully. Oct 30 00:01:04.062544 systemd[1]: session-3.scope: Deactivated successfully. Oct 30 00:01:04.063989 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. Oct 30 00:01:04.069208 systemd[1]: Started sshd@3-143.198.78.203:22-139.178.89.65:35256.service - OpenSSH per-connection server daemon (139.178.89.65:35256). Oct 30 00:01:04.070241 systemd-logind[1516]: Removed session 3. Oct 30 00:01:04.154524 sshd[1712]: Accepted publickey for core from 139.178.89.65 port 35256 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:04.157522 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:04.164160 systemd-logind[1516]: New session 4 of user core. Oct 30 00:01:04.168350 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 00:01:04.230551 sshd[1715]: Connection closed by 139.178.89.65 port 35256 Oct 30 00:01:04.231167 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:04.245295 systemd[1]: sshd@3-143.198.78.203:22-139.178.89.65:35256.service: Deactivated successfully. Oct 30 00:01:04.248260 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 00:01:04.250827 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. Oct 30 00:01:04.255417 systemd[1]: Started sshd@4-143.198.78.203:22-139.178.89.65:35270.service - OpenSSH per-connection server daemon (139.178.89.65:35270). Oct 30 00:01:04.256294 systemd-logind[1516]: Removed session 4. 
Oct 30 00:01:04.332888 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 35270 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:04.334727 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:04.342164 systemd-logind[1516]: New session 5 of user core. Oct 30 00:01:04.347402 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 00:01:04.422363 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 00:01:04.422848 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:01:04.440649 sudo[1725]: pam_unix(sudo:session): session closed for user root Oct 30 00:01:04.446124 sshd[1724]: Connection closed by 139.178.89.65 port 35270 Oct 30 00:01:04.445704 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:04.460402 systemd[1]: sshd@4-143.198.78.203:22-139.178.89.65:35270.service: Deactivated successfully. Oct 30 00:01:04.463478 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 00:01:04.465226 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. Oct 30 00:01:04.467845 systemd-logind[1516]: Removed session 5. Oct 30 00:01:04.470501 systemd[1]: Started sshd@5-143.198.78.203:22-139.178.89.65:35278.service - OpenSSH per-connection server daemon (139.178.89.65:35278). Oct 30 00:01:04.539162 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 35278 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:04.541980 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:04.550117 systemd-logind[1516]: New session 6 of user core. Oct 30 00:01:04.557368 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 30 00:01:04.620525 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 00:01:04.621581 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:01:04.628429 sudo[1736]: pam_unix(sudo:session): session closed for user root Oct 30 00:01:04.635720 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 00:01:04.636028 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:01:04.651001 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:01:04.700540 augenrules[1758]: No rules Oct 30 00:01:04.702805 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:01:04.703191 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:01:04.705144 sudo[1735]: pam_unix(sudo:session): session closed for user root Oct 30 00:01:04.708638 sshd[1734]: Connection closed by 139.178.89.65 port 35278 Oct 30 00:01:04.709779 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:04.727723 systemd[1]: sshd@5-143.198.78.203:22-139.178.89.65:35278.service: Deactivated successfully. Oct 30 00:01:04.730881 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 00:01:04.732261 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Oct 30 00:01:04.738042 systemd[1]: Started sshd@6-143.198.78.203:22-139.178.89.65:35288.service - OpenSSH per-connection server daemon (139.178.89.65:35288). Oct 30 00:01:04.739359 systemd-logind[1516]: Removed session 6. 
Oct 30 00:01:04.806904 sshd[1767]: Accepted publickey for core from 139.178.89.65 port 35288 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:01:04.808827 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:01:04.815799 systemd-logind[1516]: New session 7 of user core. Oct 30 00:01:04.822424 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 30 00:01:04.884519 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 00:01:04.885331 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:01:05.390340 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 00:01:05.414099 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 00:01:05.786580 dockerd[1788]: time="2025-10-30T00:01:05.786417652Z" level=info msg="Starting up" Oct 30 00:01:05.790725 dockerd[1788]: time="2025-10-30T00:01:05.790682869Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 00:01:05.810664 dockerd[1788]: time="2025-10-30T00:01:05.810585886Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 00:01:05.828727 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport416125711-merged.mount: Deactivated successfully. Oct 30 00:01:05.853823 dockerd[1788]: time="2025-10-30T00:01:05.853554540Z" level=info msg="Loading containers: start." Oct 30 00:01:05.865172 kernel: Initializing XFRM netlink socket Oct 30 00:01:06.181507 systemd-networkd[1423]: docker0: Link UP Oct 30 00:01:06.186244 dockerd[1788]: time="2025-10-30T00:01:06.186136270Z" level=info msg="Loading containers: done." 
Oct 30 00:01:06.203626 dockerd[1788]: time="2025-10-30T00:01:06.202953520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 00:01:06.203626 dockerd[1788]: time="2025-10-30T00:01:06.203059729Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 00:01:06.203626 dockerd[1788]: time="2025-10-30T00:01:06.203208468Z" level=info msg="Initializing buildkit" Oct 30 00:01:06.204737 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1998568995-merged.mount: Deactivated successfully. Oct 30 00:01:06.227847 dockerd[1788]: time="2025-10-30T00:01:06.227798161Z" level=info msg="Completed buildkit initialization" Oct 30 00:01:06.236577 dockerd[1788]: time="2025-10-30T00:01:06.236526033Z" level=info msg="Daemon has completed initialization" Oct 30 00:01:06.236818 dockerd[1788]: time="2025-10-30T00:01:06.236780641Z" level=info msg="API listen on /run/docker.sock" Oct 30 00:01:06.238270 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 00:01:07.172569 containerd[1550]: time="2025-10-30T00:01:07.172444989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 30 00:01:07.835717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312240145.mount: Deactivated successfully. 
Oct 30 00:01:09.144265 containerd[1550]: time="2025-10-30T00:01:09.143107964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:09.145638 containerd[1550]: time="2025-10-30T00:01:09.145597834Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 30 00:01:09.146253 containerd[1550]: time="2025-10-30T00:01:09.146220845Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:09.149308 containerd[1550]: time="2025-10-30T00:01:09.149271499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:09.151477 containerd[1550]: time="2025-10-30T00:01:09.151427935Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.978930001s" Oct 30 00:01:09.151661 containerd[1550]: time="2025-10-30T00:01:09.151639047Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 30 00:01:09.152436 containerd[1550]: time="2025-10-30T00:01:09.152407794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 30 00:01:10.852007 containerd[1550]: time="2025-10-30T00:01:10.851916035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:10.853642 containerd[1550]: time="2025-10-30T00:01:10.853572243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 30 00:01:10.856105 containerd[1550]: time="2025-10-30T00:01:10.855120242Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:10.857658 containerd[1550]: time="2025-10-30T00:01:10.857611187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:10.858856 containerd[1550]: time="2025-10-30T00:01:10.858814605Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.706274156s" Oct 30 00:01:10.858856 containerd[1550]: time="2025-10-30T00:01:10.858854087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 30 00:01:10.859572 containerd[1550]: time="2025-10-30T00:01:10.859427578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 30 00:01:11.452050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 00:01:11.454971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:01:11.656580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:01:11.673017 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:01:11.747665 kubelet[2078]: E1030 00:01:11.747527 2078 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:01:11.753042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:01:11.753231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:01:11.753547 systemd[1]: kubelet.service: Consumed 220ms CPU time, 111M memory peak. Oct 30 00:01:12.171476 containerd[1550]: time="2025-10-30T00:01:12.171122546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:12.172110 containerd[1550]: time="2025-10-30T00:01:12.172055165Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 30 00:01:12.173689 containerd[1550]: time="2025-10-30T00:01:12.173646880Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:12.180438 containerd[1550]: time="2025-10-30T00:01:12.180382078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:12.181328 containerd[1550]: time="2025-10-30T00:01:12.181279140Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.32182159s" Oct 30 00:01:12.181328 containerd[1550]: time="2025-10-30T00:01:12.181324310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 30 00:01:12.182050 containerd[1550]: time="2025-10-30T00:01:12.182012472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 30 00:01:13.238521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918236245.mount: Deactivated successfully. Oct 30 00:01:13.764003 containerd[1550]: time="2025-10-30T00:01:13.763298940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:13.764003 containerd[1550]: time="2025-10-30T00:01:13.763953528Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 30 00:01:13.764594 containerd[1550]: time="2025-10-30T00:01:13.764561350Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:13.766055 containerd[1550]: time="2025-10-30T00:01:13.766015261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:13.766829 containerd[1550]: time="2025-10-30T00:01:13.766796300Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.584589516s" Oct 30 00:01:13.766960 containerd[1550]: time="2025-10-30T00:01:13.766944332Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 30 00:01:13.767583 containerd[1550]: time="2025-10-30T00:01:13.767562597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 30 00:01:13.769224 systemd-resolved[1385]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 30 00:01:14.273014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660628262.mount: Deactivated successfully. Oct 30 00:01:15.290984 containerd[1550]: time="2025-10-30T00:01:15.290920239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:15.292184 containerd[1550]: time="2025-10-30T00:01:15.292141667Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 30 00:01:15.293829 containerd[1550]: time="2025-10-30T00:01:15.292700189Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:15.296220 containerd[1550]: time="2025-10-30T00:01:15.296175794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:15.297768 containerd[1550]: time="2025-10-30T00:01:15.297725683Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.529882288s" Oct 30 00:01:15.297768 containerd[1550]: time="2025-10-30T00:01:15.297766129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 30 00:01:15.298388 containerd[1550]: time="2025-10-30T00:01:15.298355842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 00:01:15.762649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479933027.mount: Deactivated successfully. Oct 30 00:01:15.768097 containerd[1550]: time="2025-10-30T00:01:15.768023245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:01:15.768811 containerd[1550]: time="2025-10-30T00:01:15.768776299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 00:01:15.769737 containerd[1550]: time="2025-10-30T00:01:15.769497557Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:01:15.771479 containerd[1550]: time="2025-10-30T00:01:15.771441511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:01:15.771867 containerd[1550]: time="2025-10-30T00:01:15.771834313Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.694046ms" Oct 30 00:01:15.771923 containerd[1550]: time="2025-10-30T00:01:15.771874721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 00:01:15.773092 containerd[1550]: time="2025-10-30T00:01:15.772916642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 30 00:01:16.256669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1926236971.mount: Deactivated successfully. Oct 30 00:01:16.855359 systemd-resolved[1385]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 30 00:01:18.066217 containerd[1550]: time="2025-10-30T00:01:18.066137530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:18.067037 containerd[1550]: time="2025-10-30T00:01:18.066995634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 30 00:01:18.067987 containerd[1550]: time="2025-10-30T00:01:18.067937824Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:18.071167 containerd[1550]: time="2025-10-30T00:01:18.070730980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:18.071947 containerd[1550]: time="2025-10-30T00:01:18.071900479Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.298947106s" Oct 30 00:01:18.071947 containerd[1550]: time="2025-10-30T00:01:18.071947163Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 30 00:01:20.807790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:01:20.808136 systemd[1]: kubelet.service: Consumed 220ms CPU time, 111M memory peak. Oct 30 00:01:20.811147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:01:20.848366 systemd[1]: Reload requested from client PID 2235 ('systemctl') (unit session-7.scope)... Oct 30 00:01:20.848386 systemd[1]: Reloading... Oct 30 00:01:20.992109 zram_generator::config[2280]: No configuration found. Oct 30 00:01:21.268251 systemd[1]: Reloading finished in 419 ms. Oct 30 00:01:21.327348 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:01:21.327449 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:01:21.327757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:01:21.327825 systemd[1]: kubelet.service: Consumed 130ms CPU time, 97.9M memory peak. Oct 30 00:01:21.329766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:01:21.519183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:01:21.533715 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:01:21.585216 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:01:21.585569 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:01:21.585610 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:01:21.585741 kubelet[2333]: I1030 00:01:21.585704 2333 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:01:21.806638 kubelet[2333]: I1030 00:01:21.806223 2333 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:01:21.806928 kubelet[2333]: I1030 00:01:21.806793 2333 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:01:21.807521 kubelet[2333]: I1030 00:01:21.807494 2333 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:01:21.850009 kubelet[2333]: I1030 00:01:21.849958 2333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:01:21.853545 kubelet[2333]: E1030 00:01:21.853163 2333 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.78.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
143.198.78.203:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:01:21.871777 kubelet[2333]: I1030 00:01:21.871732 2333 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:01:21.878966 kubelet[2333]: I1030 00:01:21.878921 2333 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 00:01:21.881796 kubelet[2333]: I1030 00:01:21.881673 2333 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:01:21.886117 kubelet[2333]: I1030 00:01:21.881762 2333 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-705ef66fdc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManage
rScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:01:21.886117 kubelet[2333]: I1030 00:01:21.886085 2333 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:01:21.886117 kubelet[2333]: I1030 00:01:21.886113 2333 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:01:21.887443 kubelet[2333]: I1030 00:01:21.887376 2333 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:01:21.890506 kubelet[2333]: I1030 00:01:21.890443 2333 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:01:21.890506 kubelet[2333]: I1030 00:01:21.890497 2333 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:01:21.890919 kubelet[2333]: I1030 00:01:21.890544 2333 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:01:21.890919 kubelet[2333]: I1030 00:01:21.890578 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:01:21.900509 kubelet[2333]: E1030 00:01:21.900462 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.78.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-705ef66fdc&limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:01:21.902678 kubelet[2333]: I1030 00:01:21.902629 2333 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:01:21.903458 kubelet[2333]: I1030 00:01:21.903430 2333 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Oct 30 00:01:21.905429 kubelet[2333]: W1030 00:01:21.904239 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:01:21.908090 kubelet[2333]: E1030 00:01:21.908012 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.78.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:01:21.909669 kubelet[2333]: I1030 00:01:21.909643 2333 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:01:21.909872 kubelet[2333]: I1030 00:01:21.909862 2333 server.go:1289] "Started kubelet" Oct 30 00:01:21.913694 kubelet[2333]: I1030 00:01:21.913649 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:01:21.916742 kubelet[2333]: E1030 00:01:21.915105 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.78.203:6443/api/v1/namespaces/default/events\": dial tcp 143.198.78.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-705ef66fdc.18731bd53dbd60a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-705ef66fdc,UID:ci-4459.1.0-n-705ef66fdc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-705ef66fdc,},FirstTimestamp:2025-10-30 00:01:21.909801121 +0000 UTC m=+0.369592854,LastTimestamp:2025-10-30 00:01:21.909801121 +0000 UTC m=+0.369592854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-705ef66fdc,}" Oct 30 00:01:21.917020 kubelet[2333]: I1030 00:01:21.916866 2333 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:01:21.919135 kubelet[2333]: I1030 00:01:21.919041 2333 server.go:317] "Adding debug handlers to kubelet server" Oct 30 00:01:21.922412 kubelet[2333]: I1030 00:01:21.922363 2333 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:01:21.927151 kubelet[2333]: E1030 00:01:21.924113 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" Oct 30 00:01:21.927151 kubelet[2333]: I1030 00:01:21.926951 2333 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:01:21.927335 kubelet[2333]: I1030 00:01:21.927174 2333 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:01:21.928505 kubelet[2333]: E1030 00:01:21.928112 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-705ef66fdc?timeout=10s\": dial tcp 143.198.78.203:6443: connect: connection refused" interval="200ms" Oct 30 00:01:21.928618 kubelet[2333]: I1030 00:01:21.928540 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:01:21.929417 kubelet[2333]: I1030 00:01:21.929394 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:01:21.929706 kubelet[2333]: I1030 00:01:21.929665 2333 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:01:21.929803 kubelet[2333]: I1030 00:01:21.929776 2333 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:01:21.930541 kubelet[2333]: I1030 00:01:21.930516 2333 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:01:21.934156 kubelet[2333]: E1030 00:01:21.932346 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.78.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:01:21.936229 kubelet[2333]: I1030 00:01:21.936192 2333 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:01:21.937484 kubelet[2333]: E1030 00:01:21.937424 2333 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:01:21.960106 kubelet[2333]: I1030 00:01:21.959540 2333 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:01:21.960106 kubelet[2333]: I1030 00:01:21.959559 2333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:01:21.960106 kubelet[2333]: I1030 00:01:21.959578 2333 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:01:21.962341 kubelet[2333]: I1030 00:01:21.962306 2333 policy_none.go:49] "None policy: Start" Oct 30 00:01:21.962341 kubelet[2333]: I1030 00:01:21.962351 2333 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:01:21.962484 kubelet[2333]: I1030 00:01:21.962368 2333 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:01:21.970278 kubelet[2333]: I1030 00:01:21.970224 2333 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:01:21.974431 kubelet[2333]: I1030 00:01:21.974400 2333 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 30 00:01:21.974708 kubelet[2333]: I1030 00:01:21.974697 2333 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:01:21.974800 kubelet[2333]: I1030 00:01:21.974791 2333 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:01:21.974884 kubelet[2333]: I1030 00:01:21.974876 2333 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:01:21.974991 kubelet[2333]: E1030 00:01:21.974975 2333 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:01:21.980056 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:01:21.982588 kubelet[2333]: E1030 00:01:21.982547 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.78.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:01:21.996009 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:01:22.001640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 30 00:01:22.021743 kubelet[2333]: E1030 00:01:22.021697 2333 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:01:22.023045 kubelet[2333]: I1030 00:01:22.022390 2333 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:01:22.023045 kubelet[2333]: I1030 00:01:22.022410 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:01:22.023045 kubelet[2333]: I1030 00:01:22.022794 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:01:22.027190 kubelet[2333]: E1030 00:01:22.027154 2333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:01:22.027346 kubelet[2333]: E1030 00:01:22.027224 2333 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-705ef66fdc\" not found" Oct 30 00:01:22.094537 systemd[1]: Created slice kubepods-burstable-podd17da3af8bb46998a9bdab335a60e0ef.slice - libcontainer container kubepods-burstable-podd17da3af8bb46998a9bdab335a60e0ef.slice. Oct 30 00:01:22.107611 kubelet[2333]: E1030 00:01:22.107551 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.112285 systemd[1]: Created slice kubepods-burstable-pode507a1ee4bf3b60adee6a9a381491a75.slice - libcontainer container kubepods-burstable-pode507a1ee4bf3b60adee6a9a381491a75.slice. 
Oct 30 00:01:22.123136 kubelet[2333]: E1030 00:01:22.122473 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.124451 kubelet[2333]: I1030 00:01:22.124416 2333 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.125534 kubelet[2333]: E1030 00:01:22.125488 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.78.203:6443/api/v1/nodes\": dial tcp 143.198.78.203:6443: connect: connection refused" node="ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.128597 kubelet[2333]: I1030 00:01:22.128551 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d17da3af8bb46998a9bdab335a60e0ef-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-705ef66fdc\" (UID: \"d17da3af8bb46998a9bdab335a60e0ef\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.128851 kubelet[2333]: I1030 00:01:22.128602 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e507a1ee4bf3b60adee6a9a381491a75-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" (UID: \"e507a1ee4bf3b60adee6a9a381491a75\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.128851 kubelet[2333]: I1030 00:01:22.128633 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e507a1ee4bf3b60adee6a9a381491a75-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" (UID: \"e507a1ee4bf3b60adee6a9a381491a75\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.128851 kubelet[2333]: I1030 
00:01:22.128677 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.128851 kubelet[2333]: I1030 00:01:22.128711 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.128851 kubelet[2333]: I1030 00:01:22.128733 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e507a1ee4bf3b60adee6a9a381491a75-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" (UID: \"e507a1ee4bf3b60adee6a9a381491a75\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.129104 kubelet[2333]: I1030 00:01:22.128802 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.129104 kubelet[2333]: I1030 00:01:22.128824 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: 
\"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.129104 kubelet[2333]: I1030 00:01:22.128852 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:22.130442 kubelet[2333]: E1030 00:01:22.129583 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-705ef66fdc?timeout=10s\": dial tcp 143.198.78.203:6443: connect: connection refused" interval="400ms" Oct 30 00:01:22.130236 systemd[1]: Created slice kubepods-burstable-podfe7f17239baa0e662f050c0cec183e14.slice - libcontainer container kubepods-burstable-podfe7f17239baa0e662f050c0cec183e14.slice. 
Oct 30 00:01:22.133817 kubelet[2333]: E1030 00:01:22.133773 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:22.188881 kubelet[2333]: E1030 00:01:22.188599 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.78.203:6443/api/v1/namespaces/default/events\": dial tcp 143.198.78.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-705ef66fdc.18731bd53dbd60a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-705ef66fdc,UID:ci-4459.1.0-n-705ef66fdc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-705ef66fdc,},FirstTimestamp:2025-10-30 00:01:21.909801121 +0000 UTC m=+0.369592854,LastTimestamp:2025-10-30 00:01:21.909801121 +0000 UTC m=+0.369592854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-705ef66fdc,}"
Oct 30 00:01:22.327825 kubelet[2333]: I1030 00:01:22.327782 2333 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:22.328424 kubelet[2333]: E1030 00:01:22.328367 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.78.203:6443/api/v1/nodes\": dial tcp 143.198.78.203:6443: connect: connection refused" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:22.408925 kubelet[2333]: E1030 00:01:22.408655 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:22.413531 containerd[1550]: time="2025-10-30T00:01:22.413443828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-705ef66fdc,Uid:d17da3af8bb46998a9bdab335a60e0ef,Namespace:kube-system,Attempt:0,}"
Oct 30 00:01:22.424559 kubelet[2333]: E1030 00:01:22.423887 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:22.425543 containerd[1550]: time="2025-10-30T00:01:22.425158858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-705ef66fdc,Uid:e507a1ee4bf3b60adee6a9a381491a75,Namespace:kube-system,Attempt:0,}"
Oct 30 00:01:22.437129 kubelet[2333]: E1030 00:01:22.435893 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:22.447488 containerd[1550]: time="2025-10-30T00:01:22.447348344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-705ef66fdc,Uid:fe7f17239baa0e662f050c0cec183e14,Namespace:kube-system,Attempt:0,}"
Oct 30 00:01:22.533911 kubelet[2333]: E1030 00:01:22.533813 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-705ef66fdc?timeout=10s\": dial tcp 143.198.78.203:6443: connect: connection refused" interval="800ms"
Oct 30 00:01:22.597803 containerd[1550]: time="2025-10-30T00:01:22.597373691Z" level=info msg="connecting to shim 82c2549a2b46f5f3a6e150309f8ce1bdad8438d65d574afa13119d281ee7d194" address="unix:///run/containerd/s/4d1c39e46d2aac780f0788a33641da2edbf12d154dc30c6106c1abdeb92e4586" namespace=k8s.io protocol=ttrpc version=3
Oct 30 00:01:22.633005 containerd[1550]: time="2025-10-30T00:01:22.632926916Z" level=info msg="connecting to shim fb21ed15cff386b36e1ed63a3a9c66cf8d3f40d061623f9515ebb9bf1233085e" address="unix:///run/containerd/s/8d7611425808e4dc985d0afc30692b13e37c2f361c70fe7c21f45a1eadd860a0" namespace=k8s.io protocol=ttrpc version=3
Oct 30 00:01:22.640503 containerd[1550]: time="2025-10-30T00:01:22.640406057Z" level=info msg="connecting to shim bd5e1b1e60b9bda6f7ba1b902e64e53ed86a20edb56eb0f017875b4331c1062f" address="unix:///run/containerd/s/1acc96ccc3d87541c3f242331fda65762a2c8808c1333ddc8a437404cea8b2b6" namespace=k8s.io protocol=ttrpc version=3
Oct 30 00:01:22.733150 kubelet[2333]: I1030 00:01:22.732190 2333 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:22.733150 kubelet[2333]: E1030 00:01:22.732576 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.78.203:6443/api/v1/nodes\": dial tcp 143.198.78.203:6443: connect: connection refused" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:22.771427 systemd[1]: Started cri-containerd-82c2549a2b46f5f3a6e150309f8ce1bdad8438d65d574afa13119d281ee7d194.scope - libcontainer container 82c2549a2b46f5f3a6e150309f8ce1bdad8438d65d574afa13119d281ee7d194.
Oct 30 00:01:22.774259 systemd[1]: Started cri-containerd-bd5e1b1e60b9bda6f7ba1b902e64e53ed86a20edb56eb0f017875b4331c1062f.scope - libcontainer container bd5e1b1e60b9bda6f7ba1b902e64e53ed86a20edb56eb0f017875b4331c1062f.
Oct 30 00:01:22.777815 systemd[1]: Started cri-containerd-fb21ed15cff386b36e1ed63a3a9c66cf8d3f40d061623f9515ebb9bf1233085e.scope - libcontainer container fb21ed15cff386b36e1ed63a3a9c66cf8d3f40d061623f9515ebb9bf1233085e.
Oct 30 00:01:22.897249 containerd[1550]: time="2025-10-30T00:01:22.897124758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-705ef66fdc,Uid:d17da3af8bb46998a9bdab335a60e0ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"82c2549a2b46f5f3a6e150309f8ce1bdad8438d65d574afa13119d281ee7d194\""
Oct 30 00:01:22.899242 kubelet[2333]: E1030 00:01:22.899194 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:22.909516 containerd[1550]: time="2025-10-30T00:01:22.909378060Z" level=info msg="CreateContainer within sandbox \"82c2549a2b46f5f3a6e150309f8ce1bdad8438d65d574afa13119d281ee7d194\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 30 00:01:22.918839 kubelet[2333]: E1030 00:01:22.918783 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.78.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 30 00:01:22.920041 containerd[1550]: time="2025-10-30T00:01:22.919749884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-705ef66fdc,Uid:fe7f17239baa0e662f050c0cec183e14,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb21ed15cff386b36e1ed63a3a9c66cf8d3f40d061623f9515ebb9bf1233085e\""
Oct 30 00:01:22.921313 kubelet[2333]: E1030 00:01:22.921278 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:22.922933 containerd[1550]: time="2025-10-30T00:01:22.922888798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-705ef66fdc,Uid:e507a1ee4bf3b60adee6a9a381491a75,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5e1b1e60b9bda6f7ba1b902e64e53ed86a20edb56eb0f017875b4331c1062f\""
Oct 30 00:01:22.924047 kubelet[2333]: E1030 00:01:22.924011 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:22.928955 containerd[1550]: time="2025-10-30T00:01:22.928860055Z" level=info msg="CreateContainer within sandbox \"fb21ed15cff386b36e1ed63a3a9c66cf8d3f40d061623f9515ebb9bf1233085e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 30 00:01:22.930616 containerd[1550]: time="2025-10-30T00:01:22.930373625Z" level=info msg="CreateContainer within sandbox \"bd5e1b1e60b9bda6f7ba1b902e64e53ed86a20edb56eb0f017875b4331c1062f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 30 00:01:22.930920 containerd[1550]: time="2025-10-30T00:01:22.930795642Z" level=info msg="Container d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:01:22.937586 containerd[1550]: time="2025-10-30T00:01:22.937280802Z" level=info msg="Container 36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:01:22.943850 kubelet[2333]: E1030 00:01:22.943011 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.78.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 30 00:01:22.963807 containerd[1550]: time="2025-10-30T00:01:22.963740559Z" level=info msg="CreateContainer within sandbox \"fb21ed15cff386b36e1ed63a3a9c66cf8d3f40d061623f9515ebb9bf1233085e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086\""
Oct 30 00:01:22.966217 containerd[1550]: time="2025-10-30T00:01:22.966168968Z" level=info msg="StartContainer for \"36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086\""
Oct 30 00:01:22.968710 containerd[1550]: time="2025-10-30T00:01:22.968650458Z" level=info msg="Container 13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:01:22.969846 containerd[1550]: time="2025-10-30T00:01:22.969788707Z" level=info msg="connecting to shim 36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086" address="unix:///run/containerd/s/8d7611425808e4dc985d0afc30692b13e37c2f361c70fe7c21f45a1eadd860a0" protocol=ttrpc version=3
Oct 30 00:01:22.972799 containerd[1550]: time="2025-10-30T00:01:22.972723794Z" level=info msg="CreateContainer within sandbox \"82c2549a2b46f5f3a6e150309f8ce1bdad8438d65d574afa13119d281ee7d194\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6\""
Oct 30 00:01:22.974013 containerd[1550]: time="2025-10-30T00:01:22.973960432Z" level=info msg="StartContainer for \"d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6\""
Oct 30 00:01:22.979487 containerd[1550]: time="2025-10-30T00:01:22.979400791Z" level=info msg="connecting to shim d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6" address="unix:///run/containerd/s/4d1c39e46d2aac780f0788a33641da2edbf12d154dc30c6106c1abdeb92e4586" protocol=ttrpc version=3
Oct 30 00:01:22.982309 containerd[1550]: time="2025-10-30T00:01:22.982224041Z" level=info msg="CreateContainer within sandbox \"bd5e1b1e60b9bda6f7ba1b902e64e53ed86a20edb56eb0f017875b4331c1062f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481\""
Oct 30 00:01:22.985730 containerd[1550]: time="2025-10-30T00:01:22.984343718Z" level=info msg="StartContainer for \"13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481\""
Oct 30 00:01:22.989501 containerd[1550]: time="2025-10-30T00:01:22.989427004Z" level=info msg="connecting to shim 13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481" address="unix:///run/containerd/s/1acc96ccc3d87541c3f242331fda65762a2c8808c1333ddc8a437404cea8b2b6" protocol=ttrpc version=3
Oct 30 00:01:23.019921 kubelet[2333]: E1030 00:01:23.019869 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.78.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-705ef66fdc&limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 30 00:01:23.021397 systemd[1]: Started cri-containerd-36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086.scope - libcontainer container 36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086.
Oct 30 00:01:23.045425 systemd[1]: Started cri-containerd-d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6.scope - libcontainer container d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6.
Oct 30 00:01:23.055407 systemd[1]: Started cri-containerd-13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481.scope - libcontainer container 13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481.
Oct 30 00:01:23.167139 containerd[1550]: time="2025-10-30T00:01:23.165720891Z" level=info msg="StartContainer for \"36de5b1b29c2ebbcd9568d8ff08876d71489dd323bb5efb95218a22f85699086\" returns successfully"
Oct 30 00:01:23.188138 containerd[1550]: time="2025-10-30T00:01:23.187730910Z" level=info msg="StartContainer for \"13a6f539929a969d79e4d9a84848d86a21b3439fee395f3c94131cde47867481\" returns successfully"
Oct 30 00:01:23.225685 containerd[1550]: time="2025-10-30T00:01:23.225639069Z" level=info msg="StartContainer for \"d1be211018edc255b3ebc15d5101fe49cf19f688a2c546c45c822a3260b5aef6\" returns successfully"
Oct 30 00:01:23.263015 kubelet[2333]: E1030 00:01:23.261275 2333 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.78.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.78.203:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 30 00:01:23.335123 kubelet[2333]: E1030 00:01:23.335058 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.78.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-705ef66fdc?timeout=10s\": dial tcp 143.198.78.203:6443: connect: connection refused" interval="1.6s"
Oct 30 00:01:23.534255 kubelet[2333]: I1030 00:01:23.534122 2333 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:24.012978 kubelet[2333]: E1030 00:01:24.012708 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:24.012978 kubelet[2333]: E1030 00:01:24.012902 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:24.017725 kubelet[2333]: E1030 00:01:24.017637 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:24.017916 kubelet[2333]: E1030 00:01:24.017829 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:24.022707 kubelet[2333]: E1030 00:01:24.022656 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:24.022972 kubelet[2333]: E1030 00:01:24.022821 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:25.025847 kubelet[2333]: E1030 00:01:25.025805 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.026427 kubelet[2333]: E1030 00:01:25.025957 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:25.026427 kubelet[2333]: E1030 00:01:25.026415 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.026539 kubelet[2333]: E1030 00:01:25.026522 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:25.026812 kubelet[2333]: E1030 00:01:25.026794 2333 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.026924 kubelet[2333]: E1030 00:01:25.026896 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:25.601478 kubelet[2333]: E1030 00:01:25.601400 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-705ef66fdc\" not found" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.704885 kubelet[2333]: I1030 00:01:25.704836 2333 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.727338 kubelet[2333]: I1030 00:01:25.727121 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.744865 kubelet[2333]: E1030 00:01:25.744821 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.744865 kubelet[2333]: I1030 00:01:25.744862 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.750163 kubelet[2333]: E1030 00:01:25.750106 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-705ef66fdc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.750163 kubelet[2333]: I1030 00:01:25.750152 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.758116 kubelet[2333]: E1030 00:01:25.758056 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:25.904423 kubelet[2333]: I1030 00:01:25.903887 2333 apiserver.go:52] "Watching apiserver"
Oct 30 00:01:25.927409 kubelet[2333]: I1030 00:01:25.927349 2333 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 00:01:26.027523 kubelet[2333]: I1030 00:01:26.027027 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:26.027523 kubelet[2333]: I1030 00:01:26.027173 2333 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:26.032115 kubelet[2333]: E1030 00:01:26.030653 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-705ef66fdc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:26.032115 kubelet[2333]: E1030 00:01:26.030943 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:26.034291 kubelet[2333]: E1030 00:01:26.034255 2333 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:26.034694 kubelet[2333]: E1030 00:01:26.034670 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:27.501717 systemd[1]: Reload requested from client PID 2612 ('systemctl') (unit session-7.scope)...
Oct 30 00:01:27.501735 systemd[1]: Reloading...
Oct 30 00:01:27.639128 zram_generator::config[2655]: No configuration found.
Oct 30 00:01:27.987522 systemd[1]: Reloading finished in 485 ms.
Oct 30 00:01:28.032025 kubelet[2333]: I1030 00:01:28.031945 2333 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 00:01:28.033025 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 00:01:28.043735 systemd[1]: kubelet.service: Deactivated successfully.
Oct 30 00:01:28.044188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 00:01:28.044288 systemd[1]: kubelet.service: Consumed 876ms CPU time, 126.7M memory peak.
Oct 30 00:01:28.047287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 00:01:28.233283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 00:01:28.244941 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 30 00:01:28.323687 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 00:01:28.323687 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 30 00:01:28.323687 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 00:01:28.324407 kubelet[2706]: I1030 00:01:28.323751 2706 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 30 00:01:28.332913 kubelet[2706]: I1030 00:01:28.332803 2706 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 30 00:01:28.332913 kubelet[2706]: I1030 00:01:28.332844 2706 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 30 00:01:28.333343 kubelet[2706]: I1030 00:01:28.333212 2706 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 30 00:01:28.335195 kubelet[2706]: I1030 00:01:28.335148 2706 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 30 00:01:28.349969 kubelet[2706]: I1030 00:01:28.349245 2706 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 00:01:28.360565 kubelet[2706]: I1030 00:01:28.360534 2706 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 30 00:01:28.369401 kubelet[2706]: I1030 00:01:28.369315 2706 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 30 00:01:28.371099 kubelet[2706]: I1030 00:01:28.369594 2706 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 30 00:01:28.371099 kubelet[2706]: I1030 00:01:28.369655 2706 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-705ef66fdc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 30 00:01:28.371099 kubelet[2706]: I1030 00:01:28.370003 2706 topology_manager.go:138] "Creating topology manager with none policy"
Oct 30 00:01:28.371099 kubelet[2706]: I1030 00:01:28.370020 2706 container_manager_linux.go:303] "Creating device plugin manager"
Oct 30 00:01:28.371099 kubelet[2706]: I1030 00:01:28.370100 2706 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 00:01:28.371454 kubelet[2706]: I1030 00:01:28.370321 2706 kubelet.go:480] "Attempting to sync node with API server"
Oct 30 00:01:28.371454 kubelet[2706]: I1030 00:01:28.370341 2706 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 30 00:01:28.371454 kubelet[2706]: I1030 00:01:28.370380 2706 kubelet.go:386] "Adding apiserver pod source"
Oct 30 00:01:28.372415 kubelet[2706]: I1030 00:01:28.372377 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 30 00:01:28.378603 kubelet[2706]: I1030 00:01:28.378573 2706 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 30 00:01:28.379353 kubelet[2706]: I1030 00:01:28.379247 2706 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 30 00:01:28.388623 kubelet[2706]: I1030 00:01:28.388584 2706 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 30 00:01:28.389030 kubelet[2706]: I1030 00:01:28.388915 2706 server.go:1289] "Started kubelet"
Oct 30 00:01:28.390844 kubelet[2706]: I1030 00:01:28.390563 2706 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 30 00:01:28.392893 kubelet[2706]: I1030 00:01:28.391912 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 30 00:01:28.393296 kubelet[2706]: I1030 00:01:28.393276 2706 server.go:317] "Adding debug handlers to kubelet server"
Oct 30 00:01:28.403390 kubelet[2706]: I1030 00:01:28.403305 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 30 00:01:28.403778 kubelet[2706]: I1030 00:01:28.403755 2706 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 30 00:01:28.405381 kubelet[2706]: I1030 00:01:28.405337 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 30 00:01:28.407594 kubelet[2706]: I1030 00:01:28.407559 2706 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 30 00:01:28.408242 kubelet[2706]: E1030 00:01:28.407930 2706 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-705ef66fdc\" not found"
Oct 30 00:01:28.410719 kubelet[2706]: I1030 00:01:28.410532 2706 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 30 00:01:28.410719 kubelet[2706]: I1030 00:01:28.410686 2706 reconciler.go:26] "Reconciler: start to sync state"
Oct 30 00:01:28.415265 kubelet[2706]: I1030 00:01:28.414947 2706 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 30 00:01:28.416200 kubelet[2706]: I1030 00:01:28.415946 2706 factory.go:223] Registration of the systemd container factory successfully
Oct 30 00:01:28.416481 kubelet[2706]: I1030 00:01:28.416458 2706 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 30 00:01:28.418061 kubelet[2706]: I1030 00:01:28.418036 2706 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 30 00:01:28.418061 kubelet[2706]: I1030 00:01:28.418062 2706 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 30 00:01:28.418174 kubelet[2706]: I1030 00:01:28.418103 2706 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 30 00:01:28.418174 kubelet[2706]: I1030 00:01:28.418113 2706 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 30 00:01:28.418242 kubelet[2706]: E1030 00:01:28.418169 2706 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 30 00:01:28.421211 kubelet[2706]: I1030 00:01:28.420871 2706 factory.go:223] Registration of the containerd container factory successfully
Oct 30 00:01:28.500007 kubelet[2706]: I1030 00:01:28.499868 2706 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 30 00:01:28.500007 kubelet[2706]: I1030 00:01:28.499891 2706 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 30 00:01:28.500007 kubelet[2706]: I1030 00:01:28.499921 2706 state_mem.go:36] "Initialized new in-memory state store"
Oct 30 00:01:28.501594 kubelet[2706]: I1030 00:01:28.501298 2706 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 30 00:01:28.501594 kubelet[2706]: I1030 00:01:28.501320 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 30 00:01:28.501594 kubelet[2706]: I1030 00:01:28.501340 2706 policy_none.go:49] "None policy: Start"
Oct 30 00:01:28.501594 kubelet[2706]: I1030 00:01:28.501352 2706 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 30 00:01:28.501594 kubelet[2706]: I1030 00:01:28.501366 2706 state_mem.go:35] "Initializing new in-memory state store"
Oct 30 00:01:28.501594 kubelet[2706]: I1030 00:01:28.501452 2706 state_mem.go:75] "Updated machine memory state"
Oct 30 00:01:28.513104 kubelet[2706]: E1030 00:01:28.513055 2706 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 30 00:01:28.513589 kubelet[2706]: I1030 00:01:28.513535 2706 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 30 00:01:28.513726 kubelet[2706]: I1030 00:01:28.513556 2706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 30 00:01:28.514377 kubelet[2706]: I1030 00:01:28.514362 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 30 00:01:28.517972 kubelet[2706]: E1030 00:01:28.517736 2706 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 30 00:01:28.519063 kubelet[2706]: I1030 00:01:28.519014 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.522104 kubelet[2706]: I1030 00:01:28.521481 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.522104 kubelet[2706]: I1030 00:01:28.521876 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.539961 kubelet[2706]: I1030 00:01:28.539918 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 30 00:01:28.540513 kubelet[2706]: I1030 00:01:28.540489 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 30 00:01:28.543825 kubelet[2706]: I1030 00:01:28.543791 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 30 00:01:28.612130 kubelet[2706]: I1030 00:01:28.612032 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e507a1ee4bf3b60adee6a9a381491a75-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" (UID: \"e507a1ee4bf3b60adee6a9a381491a75\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.612505 kubelet[2706]: I1030 00:01:28.612465 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e507a1ee4bf3b60adee6a9a381491a75-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" (UID: \"e507a1ee4bf3b60adee6a9a381491a75\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.612753 kubelet[2706]: I1030 00:01:28.612724 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.612914 kubelet[2706]: I1030 00:01:28.612891 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.613036 kubelet[2706]: I1030 00:01:28.613015 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.613187 kubelet[2706]: I1030 00:01:28.613162 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e507a1ee4bf3b60adee6a9a381491a75-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" (UID: \"e507a1ee4bf3b60adee6a9a381491a75\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.613324 kubelet[2706]: I1030 00:01:28.613304 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.613434 kubelet[2706]: I1030 00:01:28.613415 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe7f17239baa0e662f050c0cec183e14-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-705ef66fdc\" (UID: \"fe7f17239baa0e662f050c0cec183e14\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.613563 kubelet[2706]: I1030 00:01:28.613542 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d17da3af8bb46998a9bdab335a60e0ef-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-705ef66fdc\" (UID: \"d17da3af8bb46998a9bdab335a60e0ef\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.625826 kubelet[2706]: I1030 00:01:28.625622 2706 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.637206 kubelet[2706]: I1030 00:01:28.636353 2706 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.637206 kubelet[2706]: I1030 00:01:28.636477 2706 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:28.841026 kubelet[2706]: E1030 00:01:28.840654 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:28.844017 kubelet[2706]: E1030 00:01:28.843892 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:28.845306 kubelet[2706]: E1030 00:01:28.845058 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:29.374256 kubelet[2706]: I1030 00:01:29.374198 2706 apiserver.go:52] "Watching apiserver"
Oct 30 00:01:29.411287 kubelet[2706]: I1030 00:01:29.411245 2706 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 30 00:01:29.465709 kubelet[2706]: I1030 00:01:29.465681 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:29.466446 kubelet[2706]: E1030 00:01:29.466418 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:29.468426 kubelet[2706]: I1030 00:01:29.468399 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc"
Oct 30 00:01:29.515095 kubelet[2706]: I1030 00:01:29.514245 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 30 00:01:29.515095 kubelet[2706]: E1030 
00:01:29.514320 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-705ef66fdc\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:29.515095 kubelet[2706]: E1030 00:01:29.514491 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:29.522271 kubelet[2706]: I1030 00:01:29.522236 2706 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 30 00:01:29.522423 kubelet[2706]: E1030 00:01:29.522305 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-705ef66fdc\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc" Oct 30 00:01:29.522520 kubelet[2706]: E1030 00:01:29.522503 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:29.578405 kubelet[2706]: I1030 00:01:29.578307 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-705ef66fdc" podStartSLOduration=1.578284541 podStartE2EDuration="1.578284541s" podCreationTimestamp="2025-10-30 00:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:01:29.55255216 +0000 UTC m=+1.298347937" watchObservedRunningTime="2025-10-30 00:01:29.578284541 +0000 UTC m=+1.324080316" Oct 30 00:01:29.598249 kubelet[2706]: I1030 00:01:29.598193 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-705ef66fdc" podStartSLOduration=1.5981746719999999 
podStartE2EDuration="1.598174672s" podCreationTimestamp="2025-10-30 00:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:01:29.578668941 +0000 UTC m=+1.324464718" watchObservedRunningTime="2025-10-30 00:01:29.598174672 +0000 UTC m=+1.343970449" Oct 30 00:01:29.611767 kubelet[2706]: I1030 00:01:29.611662 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-705ef66fdc" podStartSLOduration=1.611646581 podStartE2EDuration="1.611646581s" podCreationTimestamp="2025-10-30 00:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:01:29.599170228 +0000 UTC m=+1.344966017" watchObservedRunningTime="2025-10-30 00:01:29.611646581 +0000 UTC m=+1.357442358" Oct 30 00:01:30.469110 kubelet[2706]: E1030 00:01:30.467684 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:30.469110 kubelet[2706]: E1030 00:01:30.467756 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:30.870880 kubelet[2706]: E1030 00:01:30.870714 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:33.597858 kubelet[2706]: I1030 00:01:33.597814 2706 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:01:33.599132 kubelet[2706]: I1030 00:01:33.599028 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" 
newPodCIDR="192.168.0.0/24" Oct 30 00:01:33.599178 containerd[1550]: time="2025-10-30T00:01:33.598298732Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 00:01:34.669596 systemd[1]: Created slice kubepods-besteffort-pod7f19b011_cfc3_43f8_893c_40aa4e544f9c.slice - libcontainer container kubepods-besteffort-pod7f19b011_cfc3_43f8_893c_40aa4e544f9c.slice. Oct 30 00:01:34.747815 systemd[1]: Created slice kubepods-besteffort-pod4d163424_b72d_4a64_9a41_0e0db6e9b1dc.slice - libcontainer container kubepods-besteffort-pod4d163424_b72d_4a64_9a41_0e0db6e9b1dc.slice. Oct 30 00:01:34.752773 kubelet[2706]: I1030 00:01:34.752739 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f19b011-cfc3-43f8-893c-40aa4e544f9c-xtables-lock\") pod \"kube-proxy-lcmp5\" (UID: \"7f19b011-cfc3-43f8-893c-40aa4e544f9c\") " pod="kube-system/kube-proxy-lcmp5" Oct 30 00:01:34.753367 kubelet[2706]: I1030 00:01:34.752854 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f19b011-cfc3-43f8-893c-40aa4e544f9c-lib-modules\") pod \"kube-proxy-lcmp5\" (UID: \"7f19b011-cfc3-43f8-893c-40aa4e544f9c\") " pod="kube-system/kube-proxy-lcmp5" Oct 30 00:01:34.754299 kubelet[2706]: I1030 00:01:34.754042 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk68c\" (UniqueName: \"kubernetes.io/projected/7f19b011-cfc3-43f8-893c-40aa4e544f9c-kube-api-access-tk68c\") pod \"kube-proxy-lcmp5\" (UID: \"7f19b011-cfc3-43f8-893c-40aa4e544f9c\") " pod="kube-system/kube-proxy-lcmp5" Oct 30 00:01:34.754299 kubelet[2706]: I1030 00:01:34.754144 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kqj\" (UniqueName: 
\"kubernetes.io/projected/4d163424-b72d-4a64-9a41-0e0db6e9b1dc-kube-api-access-l9kqj\") pod \"tigera-operator-7dcd859c48-v82fw\" (UID: \"4d163424-b72d-4a64-9a41-0e0db6e9b1dc\") " pod="tigera-operator/tigera-operator-7dcd859c48-v82fw" Oct 30 00:01:34.754299 kubelet[2706]: I1030 00:01:34.754194 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f19b011-cfc3-43f8-893c-40aa4e544f9c-kube-proxy\") pod \"kube-proxy-lcmp5\" (UID: \"7f19b011-cfc3-43f8-893c-40aa4e544f9c\") " pod="kube-system/kube-proxy-lcmp5" Oct 30 00:01:34.754299 kubelet[2706]: I1030 00:01:34.754216 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d163424-b72d-4a64-9a41-0e0db6e9b1dc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v82fw\" (UID: \"4d163424-b72d-4a64-9a41-0e0db6e9b1dc\") " pod="tigera-operator/tigera-operator-7dcd859c48-v82fw" Oct 30 00:01:34.979162 kubelet[2706]: E1030 00:01:34.978746 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:34.980204 containerd[1550]: time="2025-10-30T00:01:34.980159446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcmp5,Uid:7f19b011-cfc3-43f8-893c-40aa4e544f9c,Namespace:kube-system,Attempt:0,}" Oct 30 00:01:35.005102 containerd[1550]: time="2025-10-30T00:01:35.003463054Z" level=info msg="connecting to shim ad2dd0dc4dbcc1047ce08833bafa1f1f5d4e731e684c8fd264edb173fabd9d0d" address="unix:///run/containerd/s/eff55811930b3b8d24827645ca2cb4b3b0cf88651521e66d865acef525e45d9c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:35.045385 systemd[1]: Started cri-containerd-ad2dd0dc4dbcc1047ce08833bafa1f1f5d4e731e684c8fd264edb173fabd9d0d.scope - libcontainer container 
ad2dd0dc4dbcc1047ce08833bafa1f1f5d4e731e684c8fd264edb173fabd9d0d. Oct 30 00:01:35.057428 containerd[1550]: time="2025-10-30T00:01:35.057378382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v82fw,Uid:4d163424-b72d-4a64-9a41-0e0db6e9b1dc,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:01:35.093238 containerd[1550]: time="2025-10-30T00:01:35.093057950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcmp5,Uid:7f19b011-cfc3-43f8-893c-40aa4e544f9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad2dd0dc4dbcc1047ce08833bafa1f1f5d4e731e684c8fd264edb173fabd9d0d\"" Oct 30 00:01:35.095561 kubelet[2706]: E1030 00:01:35.095433 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:35.103114 containerd[1550]: time="2025-10-30T00:01:35.102843561Z" level=info msg="connecting to shim 6a53c799879a85eac7119dcf8d2d6e21224dcbacc71a1152ad7b0523fa7abd5e" address="unix:///run/containerd/s/0f8d503d35984c95c89389485b8afefdb9f1cede6f0ac9f71b314da0a9d7804a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:35.105055 containerd[1550]: time="2025-10-30T00:01:35.104779508Z" level=info msg="CreateContainer within sandbox \"ad2dd0dc4dbcc1047ce08833bafa1f1f5d4e731e684c8fd264edb173fabd9d0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:01:35.128131 containerd[1550]: time="2025-10-30T00:01:35.126686121Z" level=info msg="Container c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:01:35.137813 containerd[1550]: time="2025-10-30T00:01:35.137760208Z" level=info msg="CreateContainer within sandbox \"ad2dd0dc4dbcc1047ce08833bafa1f1f5d4e731e684c8fd264edb173fabd9d0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d\"" Oct 30 00:01:35.138877 containerd[1550]: time="2025-10-30T00:01:35.138809792Z" level=info msg="StartContainer for \"c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d\"" Oct 30 00:01:35.142488 containerd[1550]: time="2025-10-30T00:01:35.142440054Z" level=info msg="connecting to shim c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d" address="unix:///run/containerd/s/eff55811930b3b8d24827645ca2cb4b3b0cf88651521e66d865acef525e45d9c" protocol=ttrpc version=3 Oct 30 00:01:35.154329 systemd[1]: Started cri-containerd-6a53c799879a85eac7119dcf8d2d6e21224dcbacc71a1152ad7b0523fa7abd5e.scope - libcontainer container 6a53c799879a85eac7119dcf8d2d6e21224dcbacc71a1152ad7b0523fa7abd5e. Oct 30 00:01:35.182362 systemd[1]: Started cri-containerd-c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d.scope - libcontainer container c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d. Oct 30 00:01:35.239858 containerd[1550]: time="2025-10-30T00:01:35.239275007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v82fw,Uid:4d163424-b72d-4a64-9a41-0e0db6e9b1dc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6a53c799879a85eac7119dcf8d2d6e21224dcbacc71a1152ad7b0523fa7abd5e\"" Oct 30 00:01:35.246862 containerd[1550]: time="2025-10-30T00:01:35.245998336Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:01:35.250956 systemd-resolved[1385]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Oct 30 00:01:35.260595 containerd[1550]: time="2025-10-30T00:01:35.260546820Z" level=info msg="StartContainer for \"c68ad8853d1d7c5b2c0e215ef7a7ad807a9fe119b95a96bc4fd76c7b159a552d\" returns successfully" Oct 30 00:01:35.368604 kubelet[2706]: E1030 00:01:35.368546 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:35.486772 kubelet[2706]: E1030 00:01:35.486338 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:35.488726 kubelet[2706]: E1030 00:01:35.488556 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:35.516979 kubelet[2706]: I1030 00:01:35.516629 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcmp5" podStartSLOduration=1.516605845 podStartE2EDuration="1.516605845s" podCreationTimestamp="2025-10-30 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:01:35.500622484 +0000 UTC m=+7.246418261" watchObservedRunningTime="2025-10-30 00:01:35.516605845 +0000 UTC m=+7.262401626" Oct 30 00:01:36.917306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57384267.mount: Deactivated successfully. 
Oct 30 00:01:37.388187 kubelet[2706]: E1030 00:01:37.387401 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:37.494701 kubelet[2706]: E1030 00:01:37.494635 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:38.296441 containerd[1550]: time="2025-10-30T00:01:38.296361817Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:38.298145 containerd[1550]: time="2025-10-30T00:01:38.298089671Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:01:38.298600 containerd[1550]: time="2025-10-30T00:01:38.298557755Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:38.304463 containerd[1550]: time="2025-10-30T00:01:38.304347554Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:38.305701 containerd[1550]: time="2025-10-30T00:01:38.305504166Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.058444102s" Oct 30 00:01:38.305701 containerd[1550]: time="2025-10-30T00:01:38.305577542Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:01:38.311690 containerd[1550]: time="2025-10-30T00:01:38.311626107Z" level=info msg="CreateContainer within sandbox \"6a53c799879a85eac7119dcf8d2d6e21224dcbacc71a1152ad7b0523fa7abd5e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:01:38.327128 containerd[1550]: time="2025-10-30T00:01:38.327059942Z" level=info msg="Container c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:01:38.335422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922007308.mount: Deactivated successfully. Oct 30 00:01:38.339195 containerd[1550]: time="2025-10-30T00:01:38.339080319Z" level=info msg="CreateContainer within sandbox \"6a53c799879a85eac7119dcf8d2d6e21224dcbacc71a1152ad7b0523fa7abd5e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3\"" Oct 30 00:01:38.340546 containerd[1550]: time="2025-10-30T00:01:38.340509082Z" level=info msg="StartContainer for \"c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3\"" Oct 30 00:01:38.342471 containerd[1550]: time="2025-10-30T00:01:38.342404869Z" level=info msg="connecting to shim c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3" address="unix:///run/containerd/s/0f8d503d35984c95c89389485b8afefdb9f1cede6f0ac9f71b314da0a9d7804a" protocol=ttrpc version=3 Oct 30 00:01:38.370288 systemd[1]: Started cri-containerd-c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3.scope - libcontainer container c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3. 
Oct 30 00:01:38.407300 containerd[1550]: time="2025-10-30T00:01:38.407254523Z" level=info msg="StartContainer for \"c7ae8e3f1fd3b0888a6e57f8a7ed711ed3f97959cf7bb5a1765fbaad9317c2f3\" returns successfully" Oct 30 00:01:40.881561 kubelet[2706]: E1030 00:01:40.881484 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:40.943452 kubelet[2706]: I1030 00:01:40.943031 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v82fw" podStartSLOduration=3.8802422549999998 podStartE2EDuration="6.94300892s" podCreationTimestamp="2025-10-30 00:01:34 +0000 UTC" firstStartedPulling="2025-10-30 00:01:35.244690617 +0000 UTC m=+6.990486387" lastFinishedPulling="2025-10-30 00:01:38.307457276 +0000 UTC m=+10.053253052" observedRunningTime="2025-10-30 00:01:38.523823175 +0000 UTC m=+10.269618953" watchObservedRunningTime="2025-10-30 00:01:40.94300892 +0000 UTC m=+12.688804695" Oct 30 00:01:43.170168 update_engine[1525]: I20251030 00:01:43.169970 1525 update_attempter.cc:509] Updating boot flags... Oct 30 00:01:45.235742 sudo[1771]: pam_unix(sudo:session): session closed for user root Oct 30 00:01:45.239468 sshd[1770]: Connection closed by 139.178.89.65 port 35288 Oct 30 00:01:45.240441 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Oct 30 00:01:45.248461 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Oct 30 00:01:45.249775 systemd[1]: sshd@6-143.198.78.203:22-139.178.89.65:35288.service: Deactivated successfully. Oct 30 00:01:45.254761 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:01:45.255504 systemd[1]: session-7.scope: Consumed 5.163s CPU time, 158.4M memory peak. Oct 30 00:01:45.259450 systemd-logind[1516]: Removed session 7. 
Oct 30 00:01:51.724423 systemd[1]: Created slice kubepods-besteffort-pod38df7c76_b1cc_407d_9eca_14c43e1b39d9.slice - libcontainer container kubepods-besteffort-pod38df7c76_b1cc_407d_9eca_14c43e1b39d9.slice. Oct 30 00:01:51.770186 kubelet[2706]: I1030 00:01:51.770135 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/38df7c76-b1cc-407d-9eca-14c43e1b39d9-typha-certs\") pod \"calico-typha-b64d78964-zhn6k\" (UID: \"38df7c76-b1cc-407d-9eca-14c43e1b39d9\") " pod="calico-system/calico-typha-b64d78964-zhn6k" Oct 30 00:01:51.770186 kubelet[2706]: I1030 00:01:51.770186 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfbv\" (UniqueName: \"kubernetes.io/projected/38df7c76-b1cc-407d-9eca-14c43e1b39d9-kube-api-access-ldfbv\") pod \"calico-typha-b64d78964-zhn6k\" (UID: \"38df7c76-b1cc-407d-9eca-14c43e1b39d9\") " pod="calico-system/calico-typha-b64d78964-zhn6k" Oct 30 00:01:51.770769 kubelet[2706]: I1030 00:01:51.770209 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38df7c76-b1cc-407d-9eca-14c43e1b39d9-tigera-ca-bundle\") pod \"calico-typha-b64d78964-zhn6k\" (UID: \"38df7c76-b1cc-407d-9eca-14c43e1b39d9\") " pod="calico-system/calico-typha-b64d78964-zhn6k" Oct 30 00:01:52.006415 systemd[1]: Created slice kubepods-besteffort-pod74d201a6_79d9_4221_8b7a_94bb58c20ced.slice - libcontainer container kubepods-besteffort-pod74d201a6_79d9_4221_8b7a_94bb58c20ced.slice. 
Oct 30 00:01:52.030781 kubelet[2706]: E1030 00:01:52.030659 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:52.031928 containerd[1550]: time="2025-10-30T00:01:52.031877020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b64d78964-zhn6k,Uid:38df7c76-b1cc-407d-9eca-14c43e1b39d9,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:52.062134 containerd[1550]: time="2025-10-30T00:01:52.061625345Z" level=info msg="connecting to shim 128c3a0f6560272da5c624e7562ca8b7f7b2004ab3e57967164c1c8e761ce448" address="unix:///run/containerd/s/17be13cc95b9f3fe8b49908d81ebfa5e53dc4be70db849812ffc8240d49ae17d" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:52.076111 kubelet[2706]: I1030 00:01:52.074283 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-policysync\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081413 kubelet[2706]: I1030 00:01:52.079262 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/74d201a6-79d9-4221-8b7a-94bb58c20ced-node-certs\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081413 kubelet[2706]: I1030 00:01:52.079362 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-var-lib-calico\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081413 kubelet[2706]: 
I1030 00:01:52.079389 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-cni-log-dir\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081413 kubelet[2706]: I1030 00:01:52.079436 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-cni-net-dir\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081413 kubelet[2706]: I1030 00:01:52.079462 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-flexvol-driver-host\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081750 kubelet[2706]: I1030 00:01:52.079519 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-xtables-lock\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081750 kubelet[2706]: I1030 00:01:52.079568 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74d201a6-79d9-4221-8b7a-94bb58c20ced-tigera-ca-bundle\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081750 kubelet[2706]: I1030 00:01:52.079592 2706 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-var-run-calico\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.081750 kubelet[2706]: I1030 00:01:52.081250 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzcz\" (UniqueName: \"kubernetes.io/projected/74d201a6-79d9-4221-8b7a-94bb58c20ced-kube-api-access-8nzcz\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.082068 kubelet[2706]: I1030 00:01:52.081357 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-cni-bin-dir\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.082068 kubelet[2706]: I1030 00:01:52.082022 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74d201a6-79d9-4221-8b7a-94bb58c20ced-lib-modules\") pod \"calico-node-qtjwx\" (UID: \"74d201a6-79d9-4221-8b7a-94bb58c20ced\") " pod="calico-system/calico-node-qtjwx" Oct 30 00:01:52.120357 systemd[1]: Started cri-containerd-128c3a0f6560272da5c624e7562ca8b7f7b2004ab3e57967164c1c8e761ce448.scope - libcontainer container 128c3a0f6560272da5c624e7562ca8b7f7b2004ab3e57967164c1c8e761ce448. 
Oct 30 00:01:52.186463 kubelet[2706]: E1030 00:01:52.186405 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.186463 kubelet[2706]: W1030 00:01:52.186432 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.186463 kubelet[2706]: E1030 00:01:52.186458 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.187375 kubelet[2706]: E1030 00:01:52.186773 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.187375 kubelet[2706]: W1030 00:01:52.186785 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.187375 kubelet[2706]: E1030 00:01:52.186802 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.187375 kubelet[2706]: E1030 00:01:52.187046 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.187375 kubelet[2706]: W1030 00:01:52.187058 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.187375 kubelet[2706]: E1030 00:01:52.187089 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.189249 kubelet[2706]: E1030 00:01:52.188240 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.189249 kubelet[2706]: W1030 00:01:52.188262 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.189249 kubelet[2706]: E1030 00:01:52.188280 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.189249 kubelet[2706]: E1030 00:01:52.189124 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.189249 kubelet[2706]: W1030 00:01:52.189137 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.189249 kubelet[2706]: E1030 00:01:52.189152 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.190797 kubelet[2706]: E1030 00:01:52.190330 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.190797 kubelet[2706]: W1030 00:01:52.190350 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.190797 kubelet[2706]: E1030 00:01:52.190367 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.190797 kubelet[2706]: E1030 00:01:52.190570 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.190797 kubelet[2706]: W1030 00:01:52.190578 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.190797 kubelet[2706]: E1030 00:01:52.190590 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.190797 kubelet[2706]: E1030 00:01:52.190802 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.190797 kubelet[2706]: W1030 00:01:52.190810 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.191132 kubelet[2706]: E1030 00:01:52.190820 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.195779 kubelet[2706]: E1030 00:01:52.195717 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.196557 kubelet[2706]: W1030 00:01:52.196067 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.196557 kubelet[2706]: E1030 00:01:52.196314 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.199891 kubelet[2706]: E1030 00:01:52.199154 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.199891 kubelet[2706]: W1030 00:01:52.199182 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.199891 kubelet[2706]: E1030 00:01:52.199209 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.208619 kubelet[2706]: E1030 00:01:52.208582 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.208619 kubelet[2706]: W1030 00:01:52.208611 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.209604 kubelet[2706]: E1030 00:01:52.208632 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.210786 kubelet[2706]: E1030 00:01:52.210522 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.210786 kubelet[2706]: W1030 00:01:52.210544 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.210786 kubelet[2706]: E1030 00:01:52.210565 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.210915 kubelet[2706]: E1030 00:01:52.210839 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.210915 kubelet[2706]: W1030 00:01:52.210854 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.210915 kubelet[2706]: E1030 00:01:52.210870 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.213122 kubelet[2706]: E1030 00:01:52.213056 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.213358 kubelet[2706]: W1030 00:01:52.213136 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.213358 kubelet[2706]: E1030 00:01:52.213163 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.213752 kubelet[2706]: E1030 00:01:52.213663 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.213752 kubelet[2706]: W1030 00:01:52.213674 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.213752 kubelet[2706]: E1030 00:01:52.213687 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.214306 kubelet[2706]: E1030 00:01:52.214289 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.214306 kubelet[2706]: W1030 00:01:52.214304 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.214807 kubelet[2706]: E1030 00:01:52.214315 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.216726 kubelet[2706]: E1030 00:01:52.216639 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.216726 kubelet[2706]: W1030 00:01:52.216656 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.216726 kubelet[2706]: E1030 00:01:52.216669 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.219141 kubelet[2706]: E1030 00:01:52.218993 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.219141 kubelet[2706]: W1030 00:01:52.219025 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.219141 kubelet[2706]: E1030 00:01:52.219041 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.220119 kubelet[2706]: E1030 00:01:52.220057 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.220425 kubelet[2706]: W1030 00:01:52.220398 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.220496 kubelet[2706]: E1030 00:01:52.220423 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.223110 kubelet[2706]: E1030 00:01:52.223071 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.223110 kubelet[2706]: W1030 00:01:52.223101 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.223110 kubelet[2706]: E1030 00:01:52.223119 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.224332 kubelet[2706]: E1030 00:01:52.224310 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.224332 kubelet[2706]: W1030 00:01:52.224329 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.224458 kubelet[2706]: E1030 00:01:52.224346 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.224715 kubelet[2706]: E1030 00:01:52.224697 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.224715 kubelet[2706]: W1030 00:01:52.224711 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.224807 kubelet[2706]: E1030 00:01:52.224722 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.225266 kubelet[2706]: E1030 00:01:52.225248 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.225266 kubelet[2706]: W1030 00:01:52.225263 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.225357 kubelet[2706]: E1030 00:01:52.225275 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.231704 containerd[1550]: time="2025-10-30T00:01:52.231652133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b64d78964-zhn6k,Uid:38df7c76-b1cc-407d-9eca-14c43e1b39d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"128c3a0f6560272da5c624e7562ca8b7f7b2004ab3e57967164c1c8e761ce448\"" Oct 30 00:01:52.234485 kubelet[2706]: E1030 00:01:52.234447 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:52.235958 containerd[1550]: time="2025-10-30T00:01:52.235913340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:01:52.265300 kubelet[2706]: E1030 00:01:52.265145 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:01:52.312011 kubelet[2706]: E1030 00:01:52.311966 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:52.315171 containerd[1550]: time="2025-10-30T00:01:52.315122558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qtjwx,Uid:74d201a6-79d9-4221-8b7a-94bb58c20ced,Namespace:calico-system,Attempt:0,}" Oct 30 00:01:52.346901 containerd[1550]: time="2025-10-30T00:01:52.346817392Z" level=info msg="connecting to shim b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d" address="unix:///run/containerd/s/0c25e700e6f410bd03e0a7c8337e79f3e34fb35cffe8399ea438cec42820fa8a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:01:52.354520 kubelet[2706]: E1030 
00:01:52.354470 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.354520 kubelet[2706]: W1030 00:01:52.354506 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.354520 kubelet[2706]: E1030 00:01:52.354540 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.355318 kubelet[2706]: E1030 00:01:52.354840 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.355318 kubelet[2706]: W1030 00:01:52.354856 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.355318 kubelet[2706]: E1030 00:01:52.354877 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.355318 kubelet[2706]: E1030 00:01:52.355298 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.355318 kubelet[2706]: W1030 00:01:52.355313 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.355318 kubelet[2706]: E1030 00:01:52.355330 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.367405 kubelet[2706]: E1030 00:01:52.367335 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.367405 kubelet[2706]: W1030 00:01:52.367367 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.367405 kubelet[2706]: E1030 00:01:52.367395 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.367636 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369449 kubelet[2706]: W1030 00:01:52.367644 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.367653 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.367802 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369449 kubelet[2706]: W1030 00:01:52.367808 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.367815 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.367992 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369449 kubelet[2706]: W1030 00:01:52.368003 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.368015 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.369449 kubelet[2706]: E1030 00:01:52.368333 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369809 kubelet[2706]: W1030 00:01:52.368349 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.369809 kubelet[2706]: E1030 00:01:52.368364 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.369809 kubelet[2706]: E1030 00:01:52.368649 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369809 kubelet[2706]: W1030 00:01:52.368664 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.369809 kubelet[2706]: E1030 00:01:52.368678 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.369809 kubelet[2706]: E1030 00:01:52.369442 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369809 kubelet[2706]: W1030 00:01:52.369463 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.369809 kubelet[2706]: E1030 00:01:52.369481 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.369809 kubelet[2706]: E1030 00:01:52.369737 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.369809 kubelet[2706]: W1030 00:01:52.369751 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.371343 kubelet[2706]: E1030 00:01:52.369766 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.371343 kubelet[2706]: E1030 00:01:52.370240 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.371343 kubelet[2706]: W1030 00:01:52.370256 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.371343 kubelet[2706]: E1030 00:01:52.370270 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.372390 kubelet[2706]: E1030 00:01:52.372355 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.372390 kubelet[2706]: W1030 00:01:52.372379 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.372537 kubelet[2706]: E1030 00:01:52.372408 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.373432 kubelet[2706]: E1030 00:01:52.372665 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.373432 kubelet[2706]: W1030 00:01:52.372685 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.373432 kubelet[2706]: E1030 00:01:52.372701 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.373432 kubelet[2706]: E1030 00:01:52.372931 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.373432 kubelet[2706]: W1030 00:01:52.372942 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.373432 kubelet[2706]: E1030 00:01:52.372954 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.373432 kubelet[2706]: E1030 00:01:52.373212 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.373432 kubelet[2706]: W1030 00:01:52.373237 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.373432 kubelet[2706]: E1030 00:01:52.373255 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.373905 kubelet[2706]: E1030 00:01:52.373496 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.373905 kubelet[2706]: W1030 00:01:52.373505 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.373905 kubelet[2706]: E1030 00:01:52.373516 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.373905 kubelet[2706]: E1030 00:01:52.373704 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.373905 kubelet[2706]: W1030 00:01:52.373715 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.373905 kubelet[2706]: E1030 00:01:52.373727 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.376126 kubelet[2706]: E1030 00:01:52.373952 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.376126 kubelet[2706]: W1030 00:01:52.373963 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.376126 kubelet[2706]: E1030 00:01:52.373974 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.376126 kubelet[2706]: E1030 00:01:52.374180 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.376126 kubelet[2706]: W1030 00:01:52.374189 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.376126 kubelet[2706]: E1030 00:01:52.374199 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.389245 kubelet[2706]: E1030 00:01:52.389195 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.389245 kubelet[2706]: W1030 00:01:52.389229 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.389245 kubelet[2706]: E1030 00:01:52.389257 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.389480 kubelet[2706]: I1030 00:01:52.389308 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfhq\" (UniqueName: \"kubernetes.io/projected/7c170e9a-cada-41cd-bd7c-14ab708f01d4-kube-api-access-9gfhq\") pod \"csi-node-driver-5kjbz\" (UID: \"7c170e9a-cada-41cd-bd7c-14ab708f01d4\") " pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:01:52.389723 kubelet[2706]: E1030 00:01:52.389704 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.389777 kubelet[2706]: W1030 00:01:52.389724 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.389777 kubelet[2706]: E1030 00:01:52.389743 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.389777 kubelet[2706]: I1030 00:01:52.389770 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c170e9a-cada-41cd-bd7c-14ab708f01d4-kubelet-dir\") pod \"csi-node-driver-5kjbz\" (UID: \"7c170e9a-cada-41cd-bd7c-14ab708f01d4\") " pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:01:52.390037 kubelet[2706]: E1030 00:01:52.390009 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.390037 kubelet[2706]: W1030 00:01:52.390025 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.390037 kubelet[2706]: E1030 00:01:52.390037 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.390618 kubelet[2706]: I1030 00:01:52.390061 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c170e9a-cada-41cd-bd7c-14ab708f01d4-socket-dir\") pod \"csi-node-driver-5kjbz\" (UID: \"7c170e9a-cada-41cd-bd7c-14ab708f01d4\") " pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:01:52.390618 kubelet[2706]: E1030 00:01:52.390490 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.390618 kubelet[2706]: W1030 00:01:52.390501 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.390618 kubelet[2706]: E1030 00:01:52.390519 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.391306 kubelet[2706]: E1030 00:01:52.391048 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.391306 kubelet[2706]: W1030 00:01:52.391060 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.391306 kubelet[2706]: E1030 00:01:52.391090 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.394916 kubelet[2706]: E1030 00:01:52.394866 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.394916 kubelet[2706]: W1030 00:01:52.394893 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.394916 kubelet[2706]: E1030 00:01:52.394921 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.399303 kubelet[2706]: E1030 00:01:52.399260 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.399303 kubelet[2706]: W1030 00:01:52.399296 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.399470 kubelet[2706]: E1030 00:01:52.399336 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.400108 kubelet[2706]: E1030 00:01:52.399812 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.400108 kubelet[2706]: W1030 00:01:52.399829 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.400108 kubelet[2706]: E1030 00:01:52.399850 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.400108 kubelet[2706]: I1030 00:01:52.399948 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c170e9a-cada-41cd-bd7c-14ab708f01d4-registration-dir\") pod \"csi-node-driver-5kjbz\" (UID: \"7c170e9a-cada-41cd-bd7c-14ab708f01d4\") " pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:01:52.400462 kubelet[2706]: E1030 00:01:52.400427 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.400462 kubelet[2706]: W1030 00:01:52.400444 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.400462 kubelet[2706]: E1030 00:01:52.400458 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.405225 kubelet[2706]: E1030 00:01:52.404682 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.405225 kubelet[2706]: W1030 00:01:52.404709 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.405225 kubelet[2706]: E1030 00:01:52.404734 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.408484 kubelet[2706]: E1030 00:01:52.408435 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.408484 kubelet[2706]: W1030 00:01:52.408471 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.408677 kubelet[2706]: E1030 00:01:52.408504 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.408677 kubelet[2706]: I1030 00:01:52.408547 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c170e9a-cada-41cd-bd7c-14ab708f01d4-varrun\") pod \"csi-node-driver-5kjbz\" (UID: \"7c170e9a-cada-41cd-bd7c-14ab708f01d4\") " pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:01:52.413368 kubelet[2706]: E1030 00:01:52.413314 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.413368 kubelet[2706]: W1030 00:01:52.413351 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.413368 kubelet[2706]: E1030 00:01:52.413380 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.413975 kubelet[2706]: E1030 00:01:52.413954 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.413975 kubelet[2706]: W1030 00:01:52.413974 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.414129 kubelet[2706]: E1030 00:01:52.413992 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.416495 kubelet[2706]: E1030 00:01:52.416462 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.416856 kubelet[2706]: W1030 00:01:52.416812 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.417385 kubelet[2706]: E1030 00:01:52.416982 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.420432 kubelet[2706]: E1030 00:01:52.418402 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.420432 kubelet[2706]: W1030 00:01:52.418425 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.420432 kubelet[2706]: E1030 00:01:52.418456 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.433449 systemd[1]: Started cri-containerd-b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d.scope - libcontainer container b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d. 
Oct 30 00:01:52.513442 kubelet[2706]: E1030 00:01:52.512990 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.513442 kubelet[2706]: W1030 00:01:52.513027 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.513442 kubelet[2706]: E1030 00:01:52.513171 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.514147 kubelet[2706]: E1030 00:01:52.513653 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.514147 kubelet[2706]: W1030 00:01:52.513672 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.514147 kubelet[2706]: E1030 00:01:52.513693 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.514147 kubelet[2706]: E1030 00:01:52.513887 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.514147 kubelet[2706]: W1030 00:01:52.513895 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.514147 kubelet[2706]: E1030 00:01:52.513904 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.514147 kubelet[2706]: E1030 00:01:52.514083 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.514147 kubelet[2706]: W1030 00:01:52.514096 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.514147 kubelet[2706]: E1030 00:01:52.514105 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.514335 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.523159 kubelet[2706]: W1030 00:01:52.514344 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.514353 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.514533 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.523159 kubelet[2706]: W1030 00:01:52.514540 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.514548 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.514754 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.523159 kubelet[2706]: W1030 00:01:52.514761 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.514770 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.523159 kubelet[2706]: E1030 00:01:52.515003 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.527860 kubelet[2706]: W1030 00:01:52.515018 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.527860 kubelet[2706]: E1030 00:01:52.515037 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.527860 kubelet[2706]: E1030 00:01:52.515258 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.527860 kubelet[2706]: W1030 00:01:52.515267 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.527860 kubelet[2706]: E1030 00:01:52.515275 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.527860 kubelet[2706]: E1030 00:01:52.516328 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.527860 kubelet[2706]: W1030 00:01:52.516358 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.527860 kubelet[2706]: E1030 00:01:52.516376 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.527860 kubelet[2706]: E1030 00:01:52.516637 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.527860 kubelet[2706]: W1030 00:01:52.516650 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.516666 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.516940 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.528364 kubelet[2706]: W1030 00:01:52.516952 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.516967 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.517293 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.528364 kubelet[2706]: W1030 00:01:52.517307 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.517322 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.518333 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.528364 kubelet[2706]: W1030 00:01:52.518349 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528364 kubelet[2706]: E1030 00:01:52.518365 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.518580 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.528870 kubelet[2706]: W1030 00:01:52.518589 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.518598 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.518751 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.528870 kubelet[2706]: W1030 00:01:52.518758 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.518765 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.518949 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.528870 kubelet[2706]: W1030 00:01:52.518964 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.518980 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.528870 kubelet[2706]: E1030 00:01:52.519272 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.535293 kubelet[2706]: W1030 00:01:52.519282 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.535293 kubelet[2706]: E1030 00:01:52.519294 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.535293 kubelet[2706]: E1030 00:01:52.521323 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.535293 kubelet[2706]: W1030 00:01:52.521340 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.535293 kubelet[2706]: E1030 00:01:52.521359 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.535293 kubelet[2706]: E1030 00:01:52.522258 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.535293 kubelet[2706]: W1030 00:01:52.522270 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.535293 kubelet[2706]: E1030 00:01:52.522283 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.535293 kubelet[2706]: E1030 00:01:52.522474 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.535293 kubelet[2706]: W1030 00:01:52.522485 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.522495 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.522721 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.545342 kubelet[2706]: W1030 00:01:52.522730 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.522740 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.522909 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.545342 kubelet[2706]: W1030 00:01:52.522921 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.522933 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.523146 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.545342 kubelet[2706]: W1030 00:01:52.523154 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.545342 kubelet[2706]: E1030 00:01:52.523164 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.545934 kubelet[2706]: E1030 00:01:52.523353 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.545934 kubelet[2706]: W1030 00:01:52.523361 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.545934 kubelet[2706]: E1030 00:01:52.523369 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:01:52.552147 kubelet[2706]: E1030 00:01:52.550657 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:01:52.552147 kubelet[2706]: W1030 00:01:52.550683 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:01:52.552147 kubelet[2706]: E1030 00:01:52.550711 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:01:52.691386 containerd[1550]: time="2025-10-30T00:01:52.691312545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qtjwx,Uid:74d201a6-79d9-4221-8b7a-94bb58c20ced,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\"" Oct 30 00:01:52.693072 kubelet[2706]: E1030 00:01:52.693029 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:01:53.899493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209909245.mount: Deactivated successfully. 
Oct 30 00:01:54.419237 kubelet[2706]: E1030 00:01:54.418844 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:01:54.953643 containerd[1550]: time="2025-10-30T00:01:54.952910630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:54.954267 containerd[1550]: time="2025-10-30T00:01:54.954236159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 00:01:54.955549 containerd[1550]: time="2025-10-30T00:01:54.955508131Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:54.960724 containerd[1550]: time="2025-10-30T00:01:54.960662267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.724688914s" Oct 30 00:01:54.960865 containerd[1550]: time="2025-10-30T00:01:54.960734693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 00:01:54.961403 containerd[1550]: time="2025-10-30T00:01:54.961360015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:01:54.962911 containerd[1550]: time="2025-10-30T00:01:54.962870587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:01:54.984917 containerd[1550]: time="2025-10-30T00:01:54.984849427Z" level=info msg="CreateContainer within sandbox \"128c3a0f6560272da5c624e7562ca8b7f7b2004ab3e57967164c1c8e761ce448\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 00:01:54.994110 containerd[1550]: time="2025-10-30T00:01:54.991968676Z" level=info msg="Container 925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:01:55.002279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196153492.mount: Deactivated successfully. Oct 30 00:01:55.016117 containerd[1550]: time="2025-10-30T00:01:55.015226216Z" level=info msg="CreateContainer within sandbox \"128c3a0f6560272da5c624e7562ca8b7f7b2004ab3e57967164c1c8e761ce448\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e\"" Oct 30 00:01:55.020003 containerd[1550]: time="2025-10-30T00:01:55.019948861Z" level=info msg="StartContainer for \"925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e\"" Oct 30 00:01:55.024140 containerd[1550]: time="2025-10-30T00:01:55.024058500Z" level=info msg="connecting to shim 925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e" address="unix:///run/containerd/s/17be13cc95b9f3fe8b49908d81ebfa5e53dc4be70db849812ffc8240d49ae17d" protocol=ttrpc version=3 Oct 30 00:01:55.054395 systemd[1]: Started cri-containerd-925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e.scope - libcontainer container 925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e. 
Oct 30 00:01:55.127582 containerd[1550]: time="2025-10-30T00:01:55.127531578Z" level=info msg="StartContainer for \"925d01ac95fd43a71eadf281c6744a40ef58d135130c6b22516ea48e3eb20b9e\" returns successfully"
Oct 30 00:01:55.587617 kubelet[2706]: E1030 00:01:55.587564 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:55.597547 kubelet[2706]: E1030 00:01:55.597509 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.597547 kubelet[2706]: W1030 00:01:55.597534 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.597547 kubelet[2706]: E1030 00:01:55.597557 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.597783 kubelet[2706]: E1030 00:01:55.597743 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.597783 kubelet[2706]: W1030 00:01:55.597749 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.597783 kubelet[2706]: E1030 00:01:55.597758 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.597942 kubelet[2706]: E1030 00:01:55.597919 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.597979 kubelet[2706]: W1030 00:01:55.597939 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.598007 kubelet[2706]: E1030 00:01:55.597981 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.598256 kubelet[2706]: E1030 00:01:55.598233 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.598256 kubelet[2706]: W1030 00:01:55.598253 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.598334 kubelet[2706]: E1030 00:01:55.598264 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.599520 kubelet[2706]: E1030 00:01:55.599492 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.599520 kubelet[2706]: W1030 00:01:55.599512 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.599650 kubelet[2706]: E1030 00:01:55.599530 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.601468 kubelet[2706]: E1030 00:01:55.601427 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.601468 kubelet[2706]: W1030 00:01:55.601448 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.601468 kubelet[2706]: E1030 00:01:55.601466 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.601682 kubelet[2706]: E1030 00:01:55.601670 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.601682 kubelet[2706]: W1030 00:01:55.601679 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.601749 kubelet[2706]: E1030 00:01:55.601690 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.602184 kubelet[2706]: E1030 00:01:55.602146 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.602280 kubelet[2706]: W1030 00:01:55.602262 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.602280 kubelet[2706]: E1030 00:01:55.602278 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.602672 kubelet[2706]: E1030 00:01:55.602650 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.602672 kubelet[2706]: W1030 00:01:55.602662 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.602672 kubelet[2706]: E1030 00:01:55.602672 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.604001 kubelet[2706]: E1030 00:01:55.603528 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.604001 kubelet[2706]: W1030 00:01:55.603543 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.604001 kubelet[2706]: E1030 00:01:55.603562 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.604154 kubelet[2706]: E1030 00:01:55.604032 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.604154 kubelet[2706]: W1030 00:01:55.604042 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.604154 kubelet[2706]: E1030 00:01:55.604053 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.604272 kubelet[2706]: E1030 00:01:55.604258 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.604272 kubelet[2706]: W1030 00:01:55.604269 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.604332 kubelet[2706]: E1030 00:01:55.604278 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.604646 kubelet[2706]: E1030 00:01:55.604628 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.604646 kubelet[2706]: W1030 00:01:55.604642 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.604727 kubelet[2706]: E1030 00:01:55.604652 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.604911 kubelet[2706]: E1030 00:01:55.604896 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.604911 kubelet[2706]: W1030 00:01:55.604908 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.606182 kubelet[2706]: E1030 00:01:55.606140 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.606418 kubelet[2706]: E1030 00:01:55.606400 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.606418 kubelet[2706]: W1030 00:01:55.606413 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.606504 kubelet[2706]: E1030 00:01:55.606456 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.620722 kubelet[2706]: I1030 00:01:55.619171 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b64d78964-zhn6k" podStartSLOduration=1.892279898 podStartE2EDuration="4.619147951s" podCreationTimestamp="2025-10-30 00:01:51 +0000 UTC" firstStartedPulling="2025-10-30 00:01:52.235479688 +0000 UTC m=+23.981275458" lastFinishedPulling="2025-10-30 00:01:54.96234774 +0000 UTC m=+26.708143511" observedRunningTime="2025-10-30 00:01:55.617298356 +0000 UTC m=+27.363094134" watchObservedRunningTime="2025-10-30 00:01:55.619147951 +0000 UTC m=+27.364943728"
Oct 30 00:01:55.639424 kubelet[2706]: E1030 00:01:55.639376 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.639424 kubelet[2706]: W1030 00:01:55.639422 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.640286 kubelet[2706]: E1030 00:01:55.639449 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.640286 kubelet[2706]: E1030 00:01:55.639784 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.640286 kubelet[2706]: W1030 00:01:55.639796 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.640286 kubelet[2706]: E1030 00:01:55.639806 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.640286 kubelet[2706]: E1030 00:01:55.640033 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.640286 kubelet[2706]: W1030 00:01:55.640041 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.640286 kubelet[2706]: E1030 00:01:55.640053 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.640541 kubelet[2706]: E1030 00:01:55.640306 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.640541 kubelet[2706]: W1030 00:01:55.640318 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.640541 kubelet[2706]: E1030 00:01:55.640339 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.645070 kubelet[2706]: E1030 00:01:55.645026 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.645070 kubelet[2706]: W1030 00:01:55.645056 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.645279 kubelet[2706]: E1030 00:01:55.645115 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.646239 kubelet[2706]: E1030 00:01:55.646210 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.646239 kubelet[2706]: W1030 00:01:55.646234 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.646375 kubelet[2706]: E1030 00:01:55.646256 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.646498 kubelet[2706]: E1030 00:01:55.646485 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.646498 kubelet[2706]: W1030 00:01:55.646496 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.646555 kubelet[2706]: E1030 00:01:55.646506 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.646923 kubelet[2706]: E1030 00:01:55.646905 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.646923 kubelet[2706]: W1030 00:01:55.646920 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.646990 kubelet[2706]: E1030 00:01:55.646932 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.647891 kubelet[2706]: E1030 00:01:55.647865 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.647891 kubelet[2706]: W1030 00:01:55.647883 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.647985 kubelet[2706]: E1030 00:01:55.647897 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.648121 kubelet[2706]: E1030 00:01:55.648108 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.648121 kubelet[2706]: W1030 00:01:55.648119 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.648211 kubelet[2706]: E1030 00:01:55.648129 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.648350 kubelet[2706]: E1030 00:01:55.648334 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.648350 kubelet[2706]: W1030 00:01:55.648348 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.648444 kubelet[2706]: E1030 00:01:55.648361 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.649172 kubelet[2706]: E1030 00:01:55.649149 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.649172 kubelet[2706]: W1030 00:01:55.649167 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.649274 kubelet[2706]: E1030 00:01:55.649180 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.649429 kubelet[2706]: E1030 00:01:55.649415 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.649429 kubelet[2706]: W1030 00:01:55.649427 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.649485 kubelet[2706]: E1030 00:01:55.649436 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.651367 kubelet[2706]: E1030 00:01:55.651335 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.651367 kubelet[2706]: W1030 00:01:55.651354 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.651367 kubelet[2706]: E1030 00:01:55.651370 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.651614 kubelet[2706]: E1030 00:01:55.651571 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.651671 kubelet[2706]: W1030 00:01:55.651620 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.651671 kubelet[2706]: E1030 00:01:55.651631 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.651889 kubelet[2706]: E1030 00:01:55.651868 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.651889 kubelet[2706]: W1030 00:01:55.651883 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.651974 kubelet[2706]: E1030 00:01:55.651894 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.653340 kubelet[2706]: E1030 00:01:55.653309 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.653340 kubelet[2706]: W1030 00:01:55.653332 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.653461 kubelet[2706]: E1030 00:01:55.653349 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:55.654327 kubelet[2706]: E1030 00:01:55.654305 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 30 00:01:55.654327 kubelet[2706]: W1030 00:01:55.654321 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 30 00:01:55.654414 kubelet[2706]: E1030 00:01:55.654333 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 30 00:01:56.420390 kubelet[2706]: E1030 00:01:56.419115 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4"
Oct 30 00:01:56.431712 containerd[1550]: time="2025-10-30T00:01:56.431656933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:01:56.432654 containerd[1550]: time="2025-10-30T00:01:56.432619181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Oct 30 00:01:56.433351 containerd[1550]: time="2025-10-30T00:01:56.433311507Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:01:56.435549 containerd[1550]: time="2025-10-30T00:01:56.435517190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:01:56.436363 containerd[1550]: time="2025-10-30T00:01:56.436334215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.47342275s"
Oct 30 00:01:56.436472 containerd[1550]: time="2025-10-30T00:01:56.436458340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Oct 30 00:01:56.442766 containerd[1550]: time="2025-10-30T00:01:56.442702360Z" level=info msg="CreateContainer within sandbox \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 30 00:01:56.452153 containerd[1550]: time="2025-10-30T00:01:56.452095846Z" level=info msg="Container 867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:01:56.463460 containerd[1550]: time="2025-10-30T00:01:56.463323915Z" level=info msg="CreateContainer within sandbox \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\""
Oct 30 00:01:56.465684 containerd[1550]: time="2025-10-30T00:01:56.465645312Z" level=info msg="StartContainer for \"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\""
Oct 30 00:01:56.468093 containerd[1550]: time="2025-10-30T00:01:56.468038072Z" level=info msg="connecting to shim 867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e" address="unix:///run/containerd/s/0c25e700e6f410bd03e0a7c8337e79f3e34fb35cffe8399ea438cec42820fa8a" protocol=ttrpc version=3
Oct 30 00:01:56.510350 systemd[1]: Started cri-containerd-867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e.scope - libcontainer container 867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e.
Oct 30 00:01:56.566025 containerd[1550]: time="2025-10-30T00:01:56.565973142Z" level=info msg="StartContainer for \"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\" returns successfully"
Oct 30 00:01:56.580619 systemd[1]: cri-containerd-867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e.scope: Deactivated successfully.
Oct 30 00:01:56.595713 kubelet[2706]: E1030 00:01:56.593597 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:56.595713 kubelet[2706]: E1030 00:01:56.593804 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:56.618609 containerd[1550]: time="2025-10-30T00:01:56.618543083Z" level=info msg="received exit event container_id:\"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\" id:\"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\" pid:3432 exited_at:{seconds:1761782516 nanos:590270944}"
Oct 30 00:01:56.665566 containerd[1550]: time="2025-10-30T00:01:56.665337265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\" id:\"867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e\" pid:3432 exited_at:{seconds:1761782516 nanos:590270944}"
Oct 30 00:01:56.683390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-867162d3aeec50141c91952014eadd2646fef0f96c71a60a5517831f0909553e-rootfs.mount: Deactivated successfully.
Oct 30 00:01:57.597817 kubelet[2706]: E1030 00:01:57.597778 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:57.598892 kubelet[2706]: E1030 00:01:57.598865 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:01:57.601916 containerd[1550]: time="2025-10-30T00:01:57.601329934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Oct 30 00:01:58.419804 kubelet[2706]: E1030 00:01:58.418835 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4"
Oct 30 00:02:00.431811 kubelet[2706]: E1030 00:02:00.431344 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4"
Oct 30 00:02:02.419691 kubelet[2706]: E1030 00:02:02.419616 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4"
Oct 30 00:02:03.166495 containerd[1550]: time="2025-10-30T00:02:03.166415508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:02:03.167672 containerd[1550]: time="2025-10-30T00:02:03.167617627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Oct 30 00:02:03.168978 containerd[1550]: time="2025-10-30T00:02:03.168376229Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:02:03.187654 containerd[1550]: time="2025-10-30T00:02:03.187586006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:02:03.188854 containerd[1550]: time="2025-10-30T00:02:03.188789077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.587410574s"
Oct 30 00:02:03.188854 containerd[1550]: time="2025-10-30T00:02:03.188850355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Oct 30 00:02:03.196005 containerd[1550]: time="2025-10-30T00:02:03.195898456Z" level=info msg="CreateContainer within sandbox \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 30 00:02:03.211142 containerd[1550]: time="2025-10-30T00:02:03.208537679Z" level=info msg="Container e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4: CDI devices from CRI Config.CDIDevices: []"
Oct 30 00:02:03.219703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145576953.mount: Deactivated successfully.
Oct 30 00:02:03.232473 containerd[1550]: time="2025-10-30T00:02:03.232375516Z" level=info msg="CreateContainer within sandbox \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\""
Oct 30 00:02:03.234386 containerd[1550]: time="2025-10-30T00:02:03.234277919Z" level=info msg="StartContainer for \"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\""
Oct 30 00:02:03.237271 containerd[1550]: time="2025-10-30T00:02:03.237149708Z" level=info msg="connecting to shim e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4" address="unix:///run/containerd/s/0c25e700e6f410bd03e0a7c8337e79f3e34fb35cffe8399ea438cec42820fa8a" protocol=ttrpc version=3
Oct 30 00:02:03.268378 systemd[1]: Started cri-containerd-e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4.scope - libcontainer container e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4.
Oct 30 00:02:03.358735 containerd[1550]: time="2025-10-30T00:02:03.358680785Z" level=info msg="StartContainer for \"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\" returns successfully"
Oct 30 00:02:03.628648 kubelet[2706]: E1030 00:02:03.628472 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:02:04.134957 systemd[1]: cri-containerd-e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4.scope: Deactivated successfully.
Oct 30 00:02:04.135357 systemd[1]: cri-containerd-e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4.scope: Consumed 762ms CPU time, 162.7M memory peak, 9.8M read from disk, 171.3M written to disk.
Oct 30 00:02:04.139968 containerd[1550]: time="2025-10-30T00:02:04.139756301Z" level=info msg="received exit event container_id:\"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\" id:\"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\" pid:3489 exited_at:{seconds:1761782524 nanos:139385793}"
Oct 30 00:02:04.141533 containerd[1550]: time="2025-10-30T00:02:04.140870047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\" id:\"e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4\" pid:3489 exited_at:{seconds:1761782524 nanos:139385793}"
Oct 30 00:02:04.212012 kubelet[2706]: I1030 00:02:04.211984 2706 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Oct 30 00:02:04.248943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5c2d7d0d8c127d2892efbf0a3577a98fa6ec0a5ec4ab990b3c5fff62e043ba4-rootfs.mount: Deactivated successfully.
Oct 30 00:02:04.326838 systemd[1]: Created slice kubepods-burstable-pod6438ff5b_9f37_46ca_8d85_8b4404df8eee.slice - libcontainer container kubepods-burstable-pod6438ff5b_9f37_46ca_8d85_8b4404df8eee.slice.
Oct 30 00:02:04.367763 systemd[1]: Created slice kubepods-burstable-podc2d1fca0_3477_43db_ba6a_f3fce7a4c843.slice - libcontainer container kubepods-burstable-podc2d1fca0_3477_43db_ba6a_f3fce7a4c843.slice.
Oct 30 00:02:04.382685 systemd[1]: Created slice kubepods-besteffort-podcec1812d_1d23_4eb5_97a0_0be1dbecc779.slice - libcontainer container kubepods-besteffort-podcec1812d_1d23_4eb5_97a0_0be1dbecc779.slice.
Oct 30 00:02:04.394719 systemd[1]: Created slice kubepods-besteffort-pod040b195d_6ed8_46f6_9e09_7aab95a4cc1d.slice - libcontainer container kubepods-besteffort-pod040b195d_6ed8_46f6_9e09_7aab95a4cc1d.slice. Oct 30 00:02:04.408866 systemd[1]: Created slice kubepods-besteffort-pod9a28bd97_c5f6_423b_9db0_587ee8384ffe.slice - libcontainer container kubepods-besteffort-pod9a28bd97_c5f6_423b_9db0_587ee8384ffe.slice. Oct 30 00:02:04.411639 kubelet[2706]: I1030 00:02:04.411484 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d001f0c-0d5b-4f3a-ac85-24b0d4c18012-calico-apiserver-certs\") pod \"calico-apiserver-74894d766f-zrklx\" (UID: \"3d001f0c-0d5b-4f3a-ac85-24b0d4c18012\") " pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" Oct 30 00:02:04.411639 kubelet[2706]: I1030 00:02:04.411543 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khnm\" (UniqueName: \"kubernetes.io/projected/6438ff5b-9f37-46ca-8d85-8b4404df8eee-kube-api-access-5khnm\") pod \"coredns-674b8bbfcf-hl8c2\" (UID: \"6438ff5b-9f37-46ca-8d85-8b4404df8eee\") " pod="kube-system/coredns-674b8bbfcf-hl8c2" Oct 30 00:02:04.411639 kubelet[2706]: I1030 00:02:04.411568 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhtt\" (UniqueName: \"kubernetes.io/projected/c2d1fca0-3477-43db-ba6a-f3fce7a4c843-kube-api-access-mdhtt\") pod \"coredns-674b8bbfcf-76jv2\" (UID: \"c2d1fca0-3477-43db-ba6a-f3fce7a4c843\") " pod="kube-system/coredns-674b8bbfcf-76jv2" Oct 30 00:02:04.411639 kubelet[2706]: I1030 00:02:04.411596 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6438ff5b-9f37-46ca-8d85-8b4404df8eee-config-volume\") pod \"coredns-674b8bbfcf-hl8c2\" (UID: 
\"6438ff5b-9f37-46ca-8d85-8b4404df8eee\") " pod="kube-system/coredns-674b8bbfcf-hl8c2" Oct 30 00:02:04.411967 kubelet[2706]: I1030 00:02:04.411877 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/640bc622-db68-4e4b-a017-7a6f5994fc43-goldmane-ca-bundle\") pod \"goldmane-666569f655-4nhcg\" (UID: \"640bc622-db68-4e4b-a017-7a6f5994fc43\") " pod="calico-system/goldmane-666569f655-4nhcg" Oct 30 00:02:04.411967 kubelet[2706]: I1030 00:02:04.411922 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4j6w\" (UniqueName: \"kubernetes.io/projected/640bc622-db68-4e4b-a017-7a6f5994fc43-kube-api-access-m4j6w\") pod \"goldmane-666569f655-4nhcg\" (UID: \"640bc622-db68-4e4b-a017-7a6f5994fc43\") " pod="calico-system/goldmane-666569f655-4nhcg" Oct 30 00:02:04.411967 kubelet[2706]: I1030 00:02:04.411947 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-ca-bundle\") pod \"whisker-7555c5d9b5-vtzd5\" (UID: \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\") " pod="calico-system/whisker-7555c5d9b5-vtzd5" Oct 30 00:02:04.412113 kubelet[2706]: I1030 00:02:04.411980 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/640bc622-db68-4e4b-a017-7a6f5994fc43-config\") pod \"goldmane-666569f655-4nhcg\" (UID: \"640bc622-db68-4e4b-a017-7a6f5994fc43\") " pod="calico-system/goldmane-666569f655-4nhcg" Oct 30 00:02:04.412113 kubelet[2706]: I1030 00:02:04.412020 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckzb\" (UniqueName: \"kubernetes.io/projected/3d001f0c-0d5b-4f3a-ac85-24b0d4c18012-kube-api-access-5ckzb\") pod 
\"calico-apiserver-74894d766f-zrklx\" (UID: \"3d001f0c-0d5b-4f3a-ac85-24b0d4c18012\") " pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" Oct 30 00:02:04.412113 kubelet[2706]: I1030 00:02:04.412046 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqfc\" (UniqueName: \"kubernetes.io/projected/040b195d-6ed8-46f6-9e09-7aab95a4cc1d-kube-api-access-2rqfc\") pod \"calico-apiserver-74894d766f-gm29g\" (UID: \"040b195d-6ed8-46f6-9e09-7aab95a4cc1d\") " pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" Oct 30 00:02:04.412113 kubelet[2706]: I1030 00:02:04.412090 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d1fca0-3477-43db-ba6a-f3fce7a4c843-config-volume\") pod \"coredns-674b8bbfcf-76jv2\" (UID: \"c2d1fca0-3477-43db-ba6a-f3fce7a4c843\") " pod="kube-system/coredns-674b8bbfcf-76jv2" Oct 30 00:02:04.412277 kubelet[2706]: I1030 00:02:04.412114 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a28bd97-c5f6-423b-9db0-587ee8384ffe-tigera-ca-bundle\") pod \"calico-kube-controllers-8cb7cb6c6-ml8rk\" (UID: \"9a28bd97-c5f6-423b-9db0-587ee8384ffe\") " pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" Oct 30 00:02:04.412277 kubelet[2706]: I1030 00:02:04.412139 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-backend-key-pair\") pod \"whisker-7555c5d9b5-vtzd5\" (UID: \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\") " pod="calico-system/whisker-7555c5d9b5-vtzd5" Oct 30 00:02:04.412277 kubelet[2706]: I1030 00:02:04.412163 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/640bc622-db68-4e4b-a017-7a6f5994fc43-goldmane-key-pair\") pod \"goldmane-666569f655-4nhcg\" (UID: \"640bc622-db68-4e4b-a017-7a6f5994fc43\") " pod="calico-system/goldmane-666569f655-4nhcg" Oct 30 00:02:04.412277 kubelet[2706]: I1030 00:02:04.412187 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/040b195d-6ed8-46f6-9e09-7aab95a4cc1d-calico-apiserver-certs\") pod \"calico-apiserver-74894d766f-gm29g\" (UID: \"040b195d-6ed8-46f6-9e09-7aab95a4cc1d\") " pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" Oct 30 00:02:04.412277 kubelet[2706]: I1030 00:02:04.412215 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwvh\" (UniqueName: \"kubernetes.io/projected/9a28bd97-c5f6-423b-9db0-587ee8384ffe-kube-api-access-2zwvh\") pod \"calico-kube-controllers-8cb7cb6c6-ml8rk\" (UID: \"9a28bd97-c5f6-423b-9db0-587ee8384ffe\") " pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" Oct 30 00:02:04.412403 kubelet[2706]: I1030 00:02:04.412240 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmkf6\" (UniqueName: \"kubernetes.io/projected/cec1812d-1d23-4eb5-97a0-0be1dbecc779-kube-api-access-rmkf6\") pod \"whisker-7555c5d9b5-vtzd5\" (UID: \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\") " pod="calico-system/whisker-7555c5d9b5-vtzd5" Oct 30 00:02:04.422918 systemd[1]: Created slice kubepods-besteffort-pod3d001f0c_0d5b_4f3a_ac85_24b0d4c18012.slice - libcontainer container kubepods-besteffort-pod3d001f0c_0d5b_4f3a_ac85_24b0d4c18012.slice. Oct 30 00:02:04.432923 systemd[1]: Created slice kubepods-besteffort-pod640bc622_db68_4e4b_a017_7a6f5994fc43.slice - libcontainer container kubepods-besteffort-pod640bc622_db68_4e4b_a017_7a6f5994fc43.slice. 
Oct 30 00:02:04.446796 systemd[1]: Created slice kubepods-besteffort-pod7c170e9a_cada_41cd_bd7c_14ab708f01d4.slice - libcontainer container kubepods-besteffort-pod7c170e9a_cada_41cd_bd7c_14ab708f01d4.slice. Oct 30 00:02:04.453004 containerd[1550]: time="2025-10-30T00:02:04.452946836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kjbz,Uid:7c170e9a-cada-41cd-bd7c-14ab708f01d4,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:04.662686 kubelet[2706]: E1030 00:02:04.661956 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:04.677619 kubelet[2706]: E1030 00:02:04.676127 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:04.684904 kubelet[2706]: E1030 00:02:04.684132 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:04.685283 containerd[1550]: time="2025-10-30T00:02:04.684362467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hl8c2,Uid:6438ff5b-9f37-46ca-8d85-8b4404df8eee,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:04.686996 containerd[1550]: time="2025-10-30T00:02:04.686666079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-76jv2,Uid:c2d1fca0-3477-43db-ba6a-f3fce7a4c843,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:04.691214 containerd[1550]: time="2025-10-30T00:02:04.691165008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:02:04.693913 containerd[1550]: time="2025-10-30T00:02:04.692055877Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7555c5d9b5-vtzd5,Uid:cec1812d-1d23-4eb5-97a0-0be1dbecc779,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:04.706455 containerd[1550]: time="2025-10-30T00:02:04.706406845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-gm29g,Uid:040b195d-6ed8-46f6-9e09-7aab95a4cc1d,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:02:04.728210 containerd[1550]: time="2025-10-30T00:02:04.728165067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb7cb6c6-ml8rk,Uid:9a28bd97-c5f6-423b-9db0-587ee8384ffe,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:04.747846 containerd[1550]: time="2025-10-30T00:02:04.747410033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4nhcg,Uid:640bc622-db68-4e4b-a017-7a6f5994fc43,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:04.764717 containerd[1550]: time="2025-10-30T00:02:04.764670174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-zrklx,Uid:3d001f0c-0d5b-4f3a-ac85-24b0d4c18012,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:02:05.063959 containerd[1550]: time="2025-10-30T00:02:05.063883543Z" level=error msg="Failed to destroy network for sandbox \"b1cae1239019c94938c27b98180f59f21d0648b5a545bce06be34a6cf827aa40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.070323 containerd[1550]: time="2025-10-30T00:02:05.070225036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-gm29g,Uid:040b195d-6ed8-46f6-9e09-7aab95a4cc1d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cae1239019c94938c27b98180f59f21d0648b5a545bce06be34a6cf827aa40\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.072668 kubelet[2706]: E1030 00:02:05.072352 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cae1239019c94938c27b98180f59f21d0648b5a545bce06be34a6cf827aa40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.073737 kubelet[2706]: E1030 00:02:05.072983 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cae1239019c94938c27b98180f59f21d0648b5a545bce06be34a6cf827aa40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" Oct 30 00:02:05.073737 kubelet[2706]: E1030 00:02:05.073568 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1cae1239019c94938c27b98180f59f21d0648b5a545bce06be34a6cf827aa40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" Oct 30 00:02:05.074426 kubelet[2706]: E1030 00:02:05.074151 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74894d766f-gm29g_calico-apiserver(040b195d-6ed8-46f6-9e09-7aab95a4cc1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74894d766f-gm29g_calico-apiserver(040b195d-6ed8-46f6-9e09-7aab95a4cc1d)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1cae1239019c94938c27b98180f59f21d0648b5a545bce06be34a6cf827aa40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d" Oct 30 00:02:05.130820 containerd[1550]: time="2025-10-30T00:02:05.130623197Z" level=error msg="Failed to destroy network for sandbox \"34712f5a115c143c071470e07463099abd79f2b45d08fb56c3341527d468b76c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.139757 containerd[1550]: time="2025-10-30T00:02:05.138194996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kjbz,Uid:7c170e9a-cada-41cd-bd7c-14ab708f01d4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34712f5a115c143c071470e07463099abd79f2b45d08fb56c3341527d468b76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.140983 kubelet[2706]: E1030 00:02:05.140321 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34712f5a115c143c071470e07463099abd79f2b45d08fb56c3341527d468b76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.140983 kubelet[2706]: E1030 00:02:05.140414 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"34712f5a115c143c071470e07463099abd79f2b45d08fb56c3341527d468b76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:02:05.140983 kubelet[2706]: E1030 00:02:05.140447 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34712f5a115c143c071470e07463099abd79f2b45d08fb56c3341527d468b76c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kjbz" Oct 30 00:02:05.141410 kubelet[2706]: E1030 00:02:05.140615 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34712f5a115c143c071470e07463099abd79f2b45d08fb56c3341527d468b76c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:02:05.168203 containerd[1550]: time="2025-10-30T00:02:05.168115246Z" level=error msg="Failed to destroy network for sandbox \"b198a9e8a70e4450ef7ab21b1664cc190f31d61d1f4759439c705445588783a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 30 00:02:05.172711 containerd[1550]: time="2025-10-30T00:02:05.172641022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb7cb6c6-ml8rk,Uid:9a28bd97-c5f6-423b-9db0-587ee8384ffe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b198a9e8a70e4450ef7ab21b1664cc190f31d61d1f4759439c705445588783a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.173775 kubelet[2706]: E1030 00:02:05.173690 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b198a9e8a70e4450ef7ab21b1664cc190f31d61d1f4759439c705445588783a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.174071 kubelet[2706]: E1030 00:02:05.173881 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b198a9e8a70e4450ef7ab21b1664cc190f31d61d1f4759439c705445588783a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" Oct 30 00:02:05.174071 kubelet[2706]: E1030 00:02:05.174155 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b198a9e8a70e4450ef7ab21b1664cc190f31d61d1f4759439c705445588783a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" Oct 30 00:02:05.174679 kubelet[2706]: E1030 00:02:05.174436 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8cb7cb6c6-ml8rk_calico-system(9a28bd97-c5f6-423b-9db0-587ee8384ffe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8cb7cb6c6-ml8rk_calico-system(9a28bd97-c5f6-423b-9db0-587ee8384ffe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b198a9e8a70e4450ef7ab21b1664cc190f31d61d1f4759439c705445588783a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe" Oct 30 00:02:05.200147 containerd[1550]: time="2025-10-30T00:02:05.200089366Z" level=error msg="Failed to destroy network for sandbox \"14ee73c52076450a93640e7a3362eefac0bca6085ac3fca02454402c0ce39095\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.203685 containerd[1550]: time="2025-10-30T00:02:05.203519575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hl8c2,Uid:6438ff5b-9f37-46ca-8d85-8b4404df8eee,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee73c52076450a93640e7a3362eefac0bca6085ac3fca02454402c0ce39095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.204347 kubelet[2706]: E1030 00:02:05.204217 2706 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee73c52076450a93640e7a3362eefac0bca6085ac3fca02454402c0ce39095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.204347 kubelet[2706]: E1030 00:02:05.204317 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee73c52076450a93640e7a3362eefac0bca6085ac3fca02454402c0ce39095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hl8c2" Oct 30 00:02:05.205255 kubelet[2706]: E1030 00:02:05.204500 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee73c52076450a93640e7a3362eefac0bca6085ac3fca02454402c0ce39095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hl8c2" Oct 30 00:02:05.205363 kubelet[2706]: E1030 00:02:05.204990 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hl8c2_kube-system(6438ff5b-9f37-46ca-8d85-8b4404df8eee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hl8c2_kube-system(6438ff5b-9f37-46ca-8d85-8b4404df8eee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14ee73c52076450a93640e7a3362eefac0bca6085ac3fca02454402c0ce39095\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hl8c2" podUID="6438ff5b-9f37-46ca-8d85-8b4404df8eee" Oct 30 00:02:05.215993 containerd[1550]: time="2025-10-30T00:02:05.215911008Z" level=error msg="Failed to destroy network for sandbox \"4e8e07ddaeda67fa64c6b198affd0121539509710f64cfac15d6e48498c6c82a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.217285 containerd[1550]: time="2025-10-30T00:02:05.217216247Z" level=error msg="Failed to destroy network for sandbox \"15da0e47e947a9eebccaeac2f42199fb5650e1091674123f6eb73e75bc1cc808\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.217999 containerd[1550]: time="2025-10-30T00:02:05.217818965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-76jv2,Uid:c2d1fca0-3477-43db-ba6a-f3fce7a4c843,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8e07ddaeda67fa64c6b198affd0121539509710f64cfac15d6e48498c6c82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.218849 containerd[1550]: time="2025-10-30T00:02:05.218262308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-zrklx,Uid:3d001f0c-0d5b-4f3a-ac85-24b0d4c18012,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15da0e47e947a9eebccaeac2f42199fb5650e1091674123f6eb73e75bc1cc808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.219005 kubelet[2706]: E1030 00:02:05.218436 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15da0e47e947a9eebccaeac2f42199fb5650e1091674123f6eb73e75bc1cc808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.219005 kubelet[2706]: E1030 00:02:05.218447 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8e07ddaeda67fa64c6b198affd0121539509710f64cfac15d6e48498c6c82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.219005 kubelet[2706]: E1030 00:02:05.218502 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15da0e47e947a9eebccaeac2f42199fb5650e1091674123f6eb73e75bc1cc808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" Oct 30 00:02:05.219005 kubelet[2706]: E1030 00:02:05.218510 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8e07ddaeda67fa64c6b198affd0121539509710f64cfac15d6e48498c6c82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-76jv2" Oct 30 00:02:05.219221 kubelet[2706]: E1030 
00:02:05.218542 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8e07ddaeda67fa64c6b198affd0121539509710f64cfac15d6e48498c6c82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-76jv2" Oct 30 00:02:05.219221 kubelet[2706]: E1030 00:02:05.218562 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15da0e47e947a9eebccaeac2f42199fb5650e1091674123f6eb73e75bc1cc808\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" Oct 30 00:02:05.219221 kubelet[2706]: E1030 00:02:05.218617 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-76jv2_kube-system(c2d1fca0-3477-43db-ba6a-f3fce7a4c843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-76jv2_kube-system(c2d1fca0-3477-43db-ba6a-f3fce7a4c843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e8e07ddaeda67fa64c6b198affd0121539509710f64cfac15d6e48498c6c82a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-76jv2" podUID="c2d1fca0-3477-43db-ba6a-f3fce7a4c843" Oct 30 00:02:05.219378 kubelet[2706]: E1030 00:02:05.218646 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74894d766f-zrklx_calico-apiserver(3d001f0c-0d5b-4f3a-ac85-24b0d4c18012)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74894d766f-zrklx_calico-apiserver(3d001f0c-0d5b-4f3a-ac85-24b0d4c18012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15da0e47e947a9eebccaeac2f42199fb5650e1091674123f6eb73e75bc1cc808\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012" Oct 30 00:02:05.227568 containerd[1550]: time="2025-10-30T00:02:05.227298998Z" level=error msg="Failed to destroy network for sandbox \"86dffe845b7bfed28b6090e5108218c29e619a4ef90e2af11c1b1044e7a859b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.228600 containerd[1550]: time="2025-10-30T00:02:05.228477073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7555c5d9b5-vtzd5,Uid:cec1812d-1d23-4eb5-97a0-0be1dbecc779,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86dffe845b7bfed28b6090e5108218c29e619a4ef90e2af11c1b1044e7a859b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.231196 kubelet[2706]: E1030 00:02:05.231050 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86dffe845b7bfed28b6090e5108218c29e619a4ef90e2af11c1b1044e7a859b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 30 00:02:05.231357 kubelet[2706]: E1030 00:02:05.231236 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86dffe845b7bfed28b6090e5108218c29e619a4ef90e2af11c1b1044e7a859b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7555c5d9b5-vtzd5" Oct 30 00:02:05.231357 kubelet[2706]: E1030 00:02:05.231299 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86dffe845b7bfed28b6090e5108218c29e619a4ef90e2af11c1b1044e7a859b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7555c5d9b5-vtzd5" Oct 30 00:02:05.233159 kubelet[2706]: E1030 00:02:05.231443 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7555c5d9b5-vtzd5_calico-system(cec1812d-1d23-4eb5-97a0-0be1dbecc779)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7555c5d9b5-vtzd5_calico-system(cec1812d-1d23-4eb5-97a0-0be1dbecc779)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86dffe845b7bfed28b6090e5108218c29e619a4ef90e2af11c1b1044e7a859b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7555c5d9b5-vtzd5" podUID="cec1812d-1d23-4eb5-97a0-0be1dbecc779" Oct 30 00:02:05.252132 containerd[1550]: time="2025-10-30T00:02:05.251704633Z" level=error msg="Failed to destroy network for sandbox 
\"2dca1ff62cdc1607480befde4c4d29681512ace708fdb4092d2ae66656cfd165\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.256628 containerd[1550]: time="2025-10-30T00:02:05.256554141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4nhcg,Uid:640bc622-db68-4e4b-a017-7a6f5994fc43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dca1ff62cdc1607480befde4c4d29681512ace708fdb4092d2ae66656cfd165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.258339 kubelet[2706]: E1030 00:02:05.258284 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dca1ff62cdc1607480befde4c4d29681512ace708fdb4092d2ae66656cfd165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:02:05.258593 kubelet[2706]: E1030 00:02:05.258563 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dca1ff62cdc1607480befde4c4d29681512ace708fdb4092d2ae66656cfd165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4nhcg" Oct 30 00:02:05.258733 kubelet[2706]: E1030 00:02:05.258701 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2dca1ff62cdc1607480befde4c4d29681512ace708fdb4092d2ae66656cfd165\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4nhcg" Oct 30 00:02:05.259259 kubelet[2706]: E1030 00:02:05.259193 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4nhcg_calico-system(640bc622-db68-4e4b-a017-7a6f5994fc43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4nhcg_calico-system(640bc622-db68-4e4b-a017-7a6f5994fc43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dca1ff62cdc1607480befde4c4d29681512ace708fdb4092d2ae66656cfd165\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43" Oct 30 00:02:05.271910 systemd[1]: run-netns-cni\x2d9e241e0d\x2ddcd3\x2db424\x2dd32f\x2da4b089500600.mount: Deactivated successfully. Oct 30 00:02:05.278505 systemd[1]: run-netns-cni\x2d3a7af3e2\x2d0b1c\x2d7878\x2d52c8\x2d2b037bfd2d2e.mount: Deactivated successfully. Oct 30 00:02:10.809737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575775639.mount: Deactivated successfully. 
Oct 30 00:02:10.836525 containerd[1550]: time="2025-10-30T00:02:10.836456802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:10.837513 containerd[1550]: time="2025-10-30T00:02:10.837272221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:02:10.838971 containerd[1550]: time="2025-10-30T00:02:10.837984204Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:10.839631 containerd[1550]: time="2025-10-30T00:02:10.839600421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:10.840298 containerd[1550]: time="2025-10-30T00:02:10.840268672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.146528205s" Oct 30 00:02:10.840392 containerd[1550]: time="2025-10-30T00:02:10.840378499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:02:10.873802 containerd[1550]: time="2025-10-30T00:02:10.873684314Z" level=info msg="CreateContainer within sandbox \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:02:10.885414 containerd[1550]: time="2025-10-30T00:02:10.885359194Z" level=info msg="Container 
bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:10.899530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386108041.mount: Deactivated successfully. Oct 30 00:02:10.909814 containerd[1550]: time="2025-10-30T00:02:10.909520990Z" level=info msg="CreateContainer within sandbox \"b7d80fa3eda210354ea59041d6bc79a26425b073f366e66fe87758f49c9b818d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\"" Oct 30 00:02:10.911311 containerd[1550]: time="2025-10-30T00:02:10.911261426Z" level=info msg="StartContainer for \"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\"" Oct 30 00:02:10.919975 containerd[1550]: time="2025-10-30T00:02:10.919839270Z" level=info msg="connecting to shim bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2" address="unix:///run/containerd/s/0c25e700e6f410bd03e0a7c8337e79f3e34fb35cffe8399ea438cec42820fa8a" protocol=ttrpc version=3 Oct 30 00:02:11.096362 systemd[1]: Started cri-containerd-bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2.scope - libcontainer container bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2. Oct 30 00:02:11.150369 containerd[1550]: time="2025-10-30T00:02:11.150322421Z" level=info msg="StartContainer for \"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\" returns successfully" Oct 30 00:02:11.262812 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 00:02:11.263901 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 30 00:02:11.569436 kubelet[2706]: I1030 00:02:11.568782 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmkf6\" (UniqueName: \"kubernetes.io/projected/cec1812d-1d23-4eb5-97a0-0be1dbecc779-kube-api-access-rmkf6\") pod \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\" (UID: \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\") " Oct 30 00:02:11.569436 kubelet[2706]: I1030 00:02:11.569209 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-ca-bundle\") pod \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\" (UID: \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\") " Oct 30 00:02:11.569436 kubelet[2706]: I1030 00:02:11.569246 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-backend-key-pair\") pod \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\" (UID: \"cec1812d-1d23-4eb5-97a0-0be1dbecc779\") " Oct 30 00:02:11.574392 kubelet[2706]: I1030 00:02:11.574329 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cec1812d-1d23-4eb5-97a0-0be1dbecc779" (UID: "cec1812d-1d23-4eb5-97a0-0be1dbecc779"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:02:11.581295 kubelet[2706]: I1030 00:02:11.581226 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cec1812d-1d23-4eb5-97a0-0be1dbecc779" (UID: "cec1812d-1d23-4eb5-97a0-0be1dbecc779"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:02:11.581647 kubelet[2706]: I1030 00:02:11.581602 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec1812d-1d23-4eb5-97a0-0be1dbecc779-kube-api-access-rmkf6" (OuterVolumeSpecName: "kube-api-access-rmkf6") pod "cec1812d-1d23-4eb5-97a0-0be1dbecc779" (UID: "cec1812d-1d23-4eb5-97a0-0be1dbecc779"). InnerVolumeSpecName "kube-api-access-rmkf6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:02:11.670704 kubelet[2706]: I1030 00:02:11.670653 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmkf6\" (UniqueName: \"kubernetes.io/projected/cec1812d-1d23-4eb5-97a0-0be1dbecc779-kube-api-access-rmkf6\") on node \"ci-4459.1.0-n-705ef66fdc\" DevicePath \"\"" Oct 30 00:02:11.670704 kubelet[2706]: I1030 00:02:11.670690 2706 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-ca-bundle\") on node \"ci-4459.1.0-n-705ef66fdc\" DevicePath \"\"" Oct 30 00:02:11.670704 kubelet[2706]: I1030 00:02:11.670707 2706 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cec1812d-1d23-4eb5-97a0-0be1dbecc779-whisker-backend-key-pair\") on node \"ci-4459.1.0-n-705ef66fdc\" DevicePath \"\"" Oct 30 00:02:11.720844 kubelet[2706]: E1030 00:02:11.720722 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:11.727256 systemd[1]: Removed slice kubepods-besteffort-podcec1812d_1d23_4eb5_97a0_0be1dbecc779.slice - libcontainer container kubepods-besteffort-podcec1812d_1d23_4eb5_97a0_0be1dbecc779.slice. 
Oct 30 00:02:11.773125 kubelet[2706]: I1030 00:02:11.772068 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qtjwx" podStartSLOduration=2.625481651 podStartE2EDuration="20.772048899s" podCreationTimestamp="2025-10-30 00:01:51 +0000 UTC" firstStartedPulling="2025-10-30 00:01:52.694690605 +0000 UTC m=+24.440486372" lastFinishedPulling="2025-10-30 00:02:10.841257863 +0000 UTC m=+42.587053620" observedRunningTime="2025-10-30 00:02:11.769741556 +0000 UTC m=+43.515537349" watchObservedRunningTime="2025-10-30 00:02:11.772048899 +0000 UTC m=+43.517844676" Oct 30 00:02:11.813139 systemd[1]: var-lib-kubelet-pods-cec1812d\x2d1d23\x2d4eb5\x2d97a0\x2d0be1dbecc779-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmkf6.mount: Deactivated successfully. Oct 30 00:02:11.813843 systemd[1]: var-lib-kubelet-pods-cec1812d\x2d1d23\x2d4eb5\x2d97a0\x2d0be1dbecc779-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 00:02:11.860761 systemd[1]: Created slice kubepods-besteffort-pod1d51aa8e_d9c7_44fd_8bd6_4ee88452e454.slice - libcontainer container kubepods-besteffort-pod1d51aa8e_d9c7_44fd_8bd6_4ee88452e454.slice. 
Oct 30 00:02:11.973385 kubelet[2706]: I1030 00:02:11.973314 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfvx\" (UniqueName: \"kubernetes.io/projected/1d51aa8e-d9c7-44fd-8bd6-4ee88452e454-kube-api-access-zcfvx\") pod \"whisker-6b8bf69cc7-2jwmd\" (UID: \"1d51aa8e-d9c7-44fd-8bd6-4ee88452e454\") " pod="calico-system/whisker-6b8bf69cc7-2jwmd" Oct 30 00:02:11.973640 kubelet[2706]: I1030 00:02:11.973562 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d51aa8e-d9c7-44fd-8bd6-4ee88452e454-whisker-backend-key-pair\") pod \"whisker-6b8bf69cc7-2jwmd\" (UID: \"1d51aa8e-d9c7-44fd-8bd6-4ee88452e454\") " pod="calico-system/whisker-6b8bf69cc7-2jwmd" Oct 30 00:02:11.973640 kubelet[2706]: I1030 00:02:11.973592 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d51aa8e-d9c7-44fd-8bd6-4ee88452e454-whisker-ca-bundle\") pod \"whisker-6b8bf69cc7-2jwmd\" (UID: \"1d51aa8e-d9c7-44fd-8bd6-4ee88452e454\") " pod="calico-system/whisker-6b8bf69cc7-2jwmd" Oct 30 00:02:12.169946 containerd[1550]: time="2025-10-30T00:02:12.169410389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8bf69cc7-2jwmd,Uid:1d51aa8e-d9c7-44fd-8bd6-4ee88452e454,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:12.424860 kubelet[2706]: I1030 00:02:12.424434 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec1812d-1d23-4eb5-97a0-0be1dbecc779" path="/var/lib/kubelet/pods/cec1812d-1d23-4eb5-97a0-0be1dbecc779/volumes" Oct 30 00:02:12.684400 systemd-networkd[1423]: calidad9ddfdda5: Link UP Oct 30 00:02:12.686664 systemd-networkd[1423]: calidad9ddfdda5: Gained carrier Oct 30 00:02:12.706118 containerd[1550]: 2025-10-30 00:02:12.365 [INFO][3821] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Oct 30 00:02:12.706118 containerd[1550]: 2025-10-30 00:02:12.399 [INFO][3821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0 whisker-6b8bf69cc7- calico-system 1d51aa8e-d9c7-44fd-8bd6-4ee88452e454 926 0 2025-10-30 00:02:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b8bf69cc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc whisker-6b8bf69cc7-2jwmd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidad9ddfdda5 [] [] }} ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-" Oct 30 00:02:12.706118 containerd[1550]: 2025-10-30 00:02:12.399 [INFO][3821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.706118 containerd[1550]: 2025-10-30 00:02:12.574 [INFO][3828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" HandleID="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Workload="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.577 [INFO][3828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" 
HandleID="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Workload="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"whisker-6b8bf69cc7-2jwmd", "timestamp":"2025-10-30 00:02:12.574332867 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.577 [INFO][3828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.577 [INFO][3828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.578 [INFO][3828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.604 [INFO][3828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.619 [INFO][3828] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.626 [INFO][3828] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.630 [INFO][3828] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706381 containerd[1550]: 2025-10-30 00:02:12.636 [INFO][3828] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.636 [INFO][3828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.641 [INFO][3828] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554 Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.651 [INFO][3828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.660 [INFO][3828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.129/26] block=192.168.70.128/26 handle="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.660 [INFO][3828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.129/26] handle="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.660 [INFO][3828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:02:12.706620 containerd[1550]: 2025-10-30 00:02:12.660 [INFO][3828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.129/26] IPv6=[] ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" HandleID="k8s-pod-network.837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Workload="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.706894 containerd[1550]: 2025-10-30 00:02:12.665 [INFO][3821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0", GenerateName:"whisker-6b8bf69cc7-", Namespace:"calico-system", SelfLink:"", UID:"1d51aa8e-d9c7-44fd-8bd6-4ee88452e454", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b8bf69cc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"whisker-6b8bf69cc7-2jwmd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calidad9ddfdda5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:12.706894 containerd[1550]: 2025-10-30 00:02:12.665 [INFO][3821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.129/32] ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.707040 containerd[1550]: 2025-10-30 00:02:12.665 [INFO][3821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidad9ddfdda5 ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.707040 containerd[1550]: 2025-10-30 00:02:12.680 [INFO][3821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.708045 containerd[1550]: 2025-10-30 00:02:12.680 [INFO][3821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0", GenerateName:"whisker-6b8bf69cc7-", Namespace:"calico-system", SelfLink:"", 
UID:"1d51aa8e-d9c7-44fd-8bd6-4ee88452e454", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 2, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b8bf69cc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554", Pod:"whisker-6b8bf69cc7-2jwmd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidad9ddfdda5", MAC:"b2:cb:85:0b:d8:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:12.708456 containerd[1550]: 2025-10-30 00:02:12.700 [INFO][3821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" Namespace="calico-system" Pod="whisker-6b8bf69cc7-2jwmd" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-whisker--6b8bf69cc7--2jwmd-eth0" Oct 30 00:02:12.719287 kubelet[2706]: I1030 00:02:12.719149 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:02:12.721823 kubelet[2706]: E1030 00:02:12.720605 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:13.340753 
containerd[1550]: time="2025-10-30T00:02:13.340552008Z" level=info msg="connecting to shim 837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554" address="unix:///run/containerd/s/c026dd5627b8bbd4ea1743ef01ad0baa05bb2677a09dce27fbed440233256d2b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:13.412491 systemd[1]: Started cri-containerd-837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554.scope - libcontainer container 837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554. Oct 30 00:02:13.598210 containerd[1550]: time="2025-10-30T00:02:13.597587802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8bf69cc7-2jwmd,Uid:1d51aa8e-d9c7-44fd-8bd6-4ee88452e454,Namespace:calico-system,Attempt:0,} returns sandbox id \"837fc0c95e1755abf9c8100d0dd71f140ebc53d3e7d0dde6f02df4fa9678c554\"" Oct 30 00:02:13.603281 containerd[1550]: time="2025-10-30T00:02:13.603205530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:02:13.728114 kubelet[2706]: I1030 00:02:13.727481 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:02:13.731427 kubelet[2706]: E1030 00:02:13.731119 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:13.909624 containerd[1550]: time="2025-10-30T00:02:13.909030106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\" id:\"1ab7ccd2f5bc1f105af2696a484824d1cfaa0cb76d414178380054817e7eb78c\" pid:3942 exit_status:1 exited_at:{seconds:1761782533 nanos:867512563}" Oct 30 00:02:14.062340 containerd[1550]: time="2025-10-30T00:02:14.061659858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\" 
id:\"95525c35b08b25a119c6ca6951de56be19dfbdf2a756e4f0ad8383b9aa567f69\" pid:4039 exit_status:1 exited_at:{seconds:1761782534 nanos:61326318}" Oct 30 00:02:14.072322 systemd-networkd[1423]: calidad9ddfdda5: Gained IPv6LL Oct 30 00:02:14.132131 containerd[1550]: time="2025-10-30T00:02:14.131933356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:14.142522 containerd[1550]: time="2025-10-30T00:02:14.133809886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:02:14.143010 containerd[1550]: time="2025-10-30T00:02:14.135548000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:02:14.153249 kubelet[2706]: E1030 00:02:14.153190 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:02:14.153249 kubelet[2706]: E1030 00:02:14.153253 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:02:14.158909 kubelet[2706]: E1030 00:02:14.158814 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:928f80f085f744a79c4bc50cfb2d04dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b8bf69cc7-2jwmd_calico-system(1d51aa8e-d9c7-44fd-8bd6-4ee88452e454): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:14.162559 containerd[1550]: time="2025-10-30T00:02:14.162203038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
00:02:14.399530 systemd-networkd[1423]: vxlan.calico: Link UP Oct 30 00:02:14.399568 systemd-networkd[1423]: vxlan.calico: Gained carrier Oct 30 00:02:14.553983 containerd[1550]: time="2025-10-30T00:02:14.553901219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:14.554813 containerd[1550]: time="2025-10-30T00:02:14.554767978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:02:14.558193 containerd[1550]: time="2025-10-30T00:02:14.558125216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:02:14.558600 kubelet[2706]: E1030 00:02:14.558557 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:02:14.560186 kubelet[2706]: E1030 00:02:14.558868 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:02:14.561014 kubelet[2706]: E1030 00:02:14.560915 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b8bf69cc7-2jwmd_calico-system(1d51aa8e-d9c7-44fd-8bd6-4ee88452e454): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:14.562471 kubelet[2706]: E1030 00:02:14.562409 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454" Oct 30 00:02:14.734945 kubelet[2706]: E1030 00:02:14.734875 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454" Oct 30 00:02:15.927292 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Oct 30 00:02:16.420037 kubelet[2706]: E1030 00:02:16.419687 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:16.420881 containerd[1550]: time="2025-10-30T00:02:16.420832830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hl8c2,Uid:6438ff5b-9f37-46ca-8d85-8b4404df8eee,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:16.586903 systemd-networkd[1423]: cali3b51019f083: Link UP Oct 30 00:02:16.588734 systemd-networkd[1423]: cali3b51019f083: Gained carrier Oct 30 00:02:16.610130 containerd[1550]: 2025-10-30 00:02:16.472 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0 coredns-674b8bbfcf- kube-system 6438ff5b-9f37-46ca-8d85-8b4404df8eee 847 0 2025-10-30 00:01:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc coredns-674b8bbfcf-hl8c2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b51019f083 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-" Oct 30 00:02:16.610130 containerd[1550]: 2025-10-30 00:02:16.472 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.610130 containerd[1550]: 2025-10-30 00:02:16.522 [INFO][4143] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" HandleID="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Workload="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.522 [INFO][4143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" HandleID="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Workload="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"coredns-674b8bbfcf-hl8c2", "timestamp":"2025-10-30 00:02:16.52273351 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.523 [INFO][4143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.523 [INFO][4143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.523 [INFO][4143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.533 [INFO][4143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.540 [INFO][4143] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.550 [INFO][4143] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.555 [INFO][4143] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.610476 containerd[1550]: 2025-10-30 00:02:16.558 [INFO][4143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.558 [INFO][4143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.561 [INFO][4143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.568 [INFO][4143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.576 [INFO][4143] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.70.130/26] block=192.168.70.128/26 handle="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.576 [INFO][4143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.130/26] handle="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.576 [INFO][4143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:02:16.612248 containerd[1550]: 2025-10-30 00:02:16.576 [INFO][4143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.130/26] IPv6=[] ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" HandleID="k8s-pod-network.9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Workload="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.612432 containerd[1550]: 2025-10-30 00:02:16.580 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6438ff5b-9f37-46ca-8d85-8b4404df8eee", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"coredns-674b8bbfcf-hl8c2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b51019f083", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:16.612432 containerd[1550]: 2025-10-30 00:02:16.581 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.130/32] ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.612432 containerd[1550]: 2025-10-30 00:02:16.581 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b51019f083 ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.612432 containerd[1550]: 2025-10-30 00:02:16.589 [INFO][4132] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.612432 containerd[1550]: 2025-10-30 00:02:16.590 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6438ff5b-9f37-46ca-8d85-8b4404df8eee", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c", Pod:"coredns-674b8bbfcf-hl8c2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b51019f083", MAC:"36:9c:28:d1:73:dc", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:16.612432 containerd[1550]: 2025-10-30 00:02:16.606 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hl8c2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--hl8c2-eth0" Oct 30 00:02:16.667629 containerd[1550]: time="2025-10-30T00:02:16.667523716Z" level=info msg="connecting to shim 9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c" address="unix:///run/containerd/s/7ad933d406ccbbac20d169d724cdcb776d6276e42cb249cc80b91708728653a3" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:16.713591 systemd[1]: Started cri-containerd-9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c.scope - libcontainer container 9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c. 
Oct 30 00:02:16.794560 containerd[1550]: time="2025-10-30T00:02:16.794505454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hl8c2,Uid:6438ff5b-9f37-46ca-8d85-8b4404df8eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c\"" Oct 30 00:02:16.795845 kubelet[2706]: E1030 00:02:16.795816 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:16.800720 containerd[1550]: time="2025-10-30T00:02:16.800672779Z" level=info msg="CreateContainer within sandbox \"9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:02:16.817813 containerd[1550]: time="2025-10-30T00:02:16.817723384Z" level=info msg="Container 2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:16.821828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701320265.mount: Deactivated successfully. 
Oct 30 00:02:16.828849 containerd[1550]: time="2025-10-30T00:02:16.828780451Z" level=info msg="CreateContainer within sandbox \"9cd9c11e12414418b7ba789887057aff3ecf8655da03e4cced450a3937c9e85c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778\"" Oct 30 00:02:16.832108 containerd[1550]: time="2025-10-30T00:02:16.831569539Z" level=info msg="StartContainer for \"2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778\"" Oct 30 00:02:16.834455 containerd[1550]: time="2025-10-30T00:02:16.834407193Z" level=info msg="connecting to shim 2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778" address="unix:///run/containerd/s/7ad933d406ccbbac20d169d724cdcb776d6276e42cb249cc80b91708728653a3" protocol=ttrpc version=3 Oct 30 00:02:16.855317 systemd[1]: Started cri-containerd-2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778.scope - libcontainer container 2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778. 
Oct 30 00:02:16.901903 containerd[1550]: time="2025-10-30T00:02:16.901741937Z" level=info msg="StartContainer for \"2cca879d93507b943e2d263792156555009b0caba8ba50165405f05715cdd778\" returns successfully" Oct 30 00:02:17.743278 kubelet[2706]: E1030 00:02:17.742902 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:17.778209 kubelet[2706]: I1030 00:02:17.778146 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hl8c2" podStartSLOduration=43.778127957 podStartE2EDuration="43.778127957s" podCreationTimestamp="2025-10-30 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:02:17.760830318 +0000 UTC m=+49.506626095" watchObservedRunningTime="2025-10-30 00:02:17.778127957 +0000 UTC m=+49.523923729" Oct 30 00:02:18.421525 kubelet[2706]: E1030 00:02:18.421201 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:18.422047 containerd[1550]: time="2025-10-30T00:02:18.421478313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-gm29g,Uid:040b195d-6ed8-46f6-9e09-7aab95a4cc1d,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:02:18.423192 containerd[1550]: time="2025-10-30T00:02:18.423065776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4nhcg,Uid:640bc622-db68-4e4b-a017-7a6f5994fc43,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:18.423548 containerd[1550]: time="2025-10-30T00:02:18.423522889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kjbz,Uid:7c170e9a-cada-41cd-bd7c-14ab708f01d4,Namespace:calico-system,Attempt:0,}" 
Oct 30 00:02:18.424373 containerd[1550]: time="2025-10-30T00:02:18.423569691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-76jv2,Uid:c2d1fca0-3477-43db-ba6a-f3fce7a4c843,Namespace:kube-system,Attempt:0,}" Oct 30 00:02:18.553033 systemd-networkd[1423]: cali3b51019f083: Gained IPv6LL Oct 30 00:02:18.746396 kubelet[2706]: E1030 00:02:18.745533 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:18.837570 systemd-networkd[1423]: cali94028d7115a: Link UP Oct 30 00:02:18.840924 systemd-networkd[1423]: cali94028d7115a: Gained carrier Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.604 [INFO][4240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0 csi-node-driver- calico-system 7c170e9a-cada-41cd-bd7c-14ab708f01d4 728 0 2025-10-30 00:01:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc csi-node-driver-5kjbz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali94028d7115a [] [] }} ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.604 [INFO][4240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" 
WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.705 [INFO][4289] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" HandleID="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Workload="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.708 [INFO][4289] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" HandleID="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Workload="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"csi-node-driver-5kjbz", "timestamp":"2025-10-30 00:02:18.705623418 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.708 [INFO][4289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.708 [INFO][4289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.708 [INFO][4289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.739 [INFO][4289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.758 [INFO][4289] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.790 [INFO][4289] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.798 [INFO][4289] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.804 [INFO][4289] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.806 [INFO][4289] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.809 [INFO][4289] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138 Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.814 [INFO][4289] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.824 [INFO][4289] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.70.131/26] block=192.168.70.128/26 handle="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.824 [INFO][4289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.131/26] handle="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.824 [INFO][4289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:02:18.871301 containerd[1550]: 2025-10-30 00:02:18.824 [INFO][4289] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.131/26] IPv6=[] ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" HandleID="k8s-pod-network.4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Workload="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.873474 containerd[1550]: 2025-10-30 00:02:18.832 [INFO][4240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c170e9a-cada-41cd-bd7c-14ab708f01d4", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"csi-node-driver-5kjbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94028d7115a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:18.873474 containerd[1550]: 2025-10-30 00:02:18.833 [INFO][4240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.131/32] ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.873474 containerd[1550]: 2025-10-30 00:02:18.833 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94028d7115a ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.873474 containerd[1550]: 2025-10-30 00:02:18.842 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.873474 
containerd[1550]: 2025-10-30 00:02:18.843 [INFO][4240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c170e9a-cada-41cd-bd7c-14ab708f01d4", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138", Pod:"csi-node-driver-5kjbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali94028d7115a", MAC:"62:0e:b5:ce:b4:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:18.873474 containerd[1550]: 
2025-10-30 00:02:18.864 [INFO][4240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" Namespace="calico-system" Pod="csi-node-driver-5kjbz" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-csi--node--driver--5kjbz-eth0" Oct 30 00:02:18.931999 containerd[1550]: time="2025-10-30T00:02:18.931945638Z" level=info msg="connecting to shim 4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138" address="unix:///run/containerd/s/be0ee6b16f5f3c1270ecded27d600a1645ae653995e359026e02162efb8fd919" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:18.964296 systemd-networkd[1423]: cali68f0667c2bd: Link UP Oct 30 00:02:18.966290 systemd-networkd[1423]: cali68f0667c2bd: Gained carrier Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.616 [INFO][4259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0 coredns-674b8bbfcf- kube-system c2d1fca0-3477-43db-ba6a-f3fce7a4c843 850 0 2025-10-30 00:01:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc coredns-674b8bbfcf-76jv2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68f0667c2bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.616 [INFO][4259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" 
WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.733 [INFO][4294] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" HandleID="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Workload="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.734 [INFO][4294] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" HandleID="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Workload="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a0310), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"coredns-674b8bbfcf-76jv2", "timestamp":"2025-10-30 00:02:18.733752417 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.734 [INFO][4294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.824 [INFO][4294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.825 [INFO][4294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.851 [INFO][4294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.868 [INFO][4294] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.885 [INFO][4294] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.888 [INFO][4294] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.895 [INFO][4294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.896 [INFO][4294] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.909 [INFO][4294] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5 Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.923 [INFO][4294] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.935 [INFO][4294] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.70.132/26] block=192.168.70.128/26 handle="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.936 [INFO][4294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.132/26] handle="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.936 [INFO][4294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:02:19.022083 containerd[1550]: 2025-10-30 00:02:18.936 [INFO][4294] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.132/26] IPv6=[] ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" HandleID="k8s-pod-network.93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Workload="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.022851 containerd[1550]: 2025-10-30 00:02:18.947 [INFO][4259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c2d1fca0-3477-43db-ba6a-f3fce7a4c843", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"coredns-674b8bbfcf-76jv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68f0667c2bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.022851 containerd[1550]: 2025-10-30 00:02:18.947 [INFO][4259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.132/32] ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.022851 containerd[1550]: 2025-10-30 00:02:18.947 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68f0667c2bd ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.022851 containerd[1550]: 2025-10-30 00:02:18.974 [INFO][4259] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.022851 containerd[1550]: 2025-10-30 00:02:18.978 [INFO][4259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c2d1fca0-3477-43db-ba6a-f3fce7a4c843", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5", Pod:"coredns-674b8bbfcf-76jv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68f0667c2bd", MAC:"a2:e7:e0:a4:c2:92", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.022851 containerd[1550]: 2025-10-30 00:02:19.013 [INFO][4259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" Namespace="kube-system" Pod="coredns-674b8bbfcf-76jv2" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-coredns--674b8bbfcf--76jv2-eth0" Oct 30 00:02:19.036684 systemd[1]: Started cri-containerd-4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138.scope - libcontainer container 4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138. 
Oct 30 00:02:19.128507 systemd-networkd[1423]: cali0a8744067f2: Link UP Oct 30 00:02:19.133266 systemd-networkd[1423]: cali0a8744067f2: Gained carrier Oct 30 00:02:19.202652 containerd[1550]: time="2025-10-30T00:02:19.202594641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kjbz,Uid:7c170e9a-cada-41cd-bd7c-14ab708f01d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"4604e25cdd2afd487540b3eb5e301b12067c22a711f40696315efd0acd054138\"" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.620 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0 calico-apiserver-74894d766f- calico-apiserver 040b195d-6ed8-46f6-9e09-7aab95a4cc1d 860 0 2025-10-30 00:01:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74894d766f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc calico-apiserver-74894d766f-gm29g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0a8744067f2 [] [] }} ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.621 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.788 [INFO][4301] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" HandleID="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.788 [INFO][4301] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" HandleID="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"calico-apiserver-74894d766f-gm29g", "timestamp":"2025-10-30 00:02:18.788252863 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.788 [INFO][4301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.936 [INFO][4301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.936 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.964 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:18.992 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.004 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.014 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.023 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.023 [INFO][4301] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.031 [INFO][4301] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.049 [INFO][4301] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.083 [INFO][4301] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.70.133/26] block=192.168.70.128/26 handle="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.084 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.133/26] handle="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.085 [INFO][4301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:02:19.207236 containerd[1550]: 2025-10-30 00:02:19.086 [INFO][4301] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.133/26] IPv6=[] ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" HandleID="k8s-pod-network.886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.209496 containerd[1550]: 2025-10-30 00:02:19.096 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0", GenerateName:"calico-apiserver-74894d766f-", Namespace:"calico-apiserver", SelfLink:"", UID:"040b195d-6ed8-46f6-9e09-7aab95a4cc1d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74894d766f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"calico-apiserver-74894d766f-gm29g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a8744067f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.209496 containerd[1550]: 2025-10-30 00:02:19.096 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.133/32] ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.209496 containerd[1550]: 2025-10-30 00:02:19.096 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a8744067f2 ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.209496 containerd[1550]: 2025-10-30 00:02:19.142 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" 
Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.209496 containerd[1550]: 2025-10-30 00:02:19.144 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0", GenerateName:"calico-apiserver-74894d766f-", Namespace:"calico-apiserver", SelfLink:"", UID:"040b195d-6ed8-46f6-9e09-7aab95a4cc1d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74894d766f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c", Pod:"calico-apiserver-74894d766f-gm29g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali0a8744067f2", MAC:"52:8a:b3:5f:41:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.209496 containerd[1550]: 2025-10-30 00:02:19.186 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-gm29g" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--gm29g-eth0" Oct 30 00:02:19.211448 containerd[1550]: time="2025-10-30T00:02:19.210366448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:02:19.227468 containerd[1550]: time="2025-10-30T00:02:19.227422697Z" level=info msg="connecting to shim 93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5" address="unix:///run/containerd/s/172b3ce79d653a71a51f437ac9a2e0d24f052c9d476b07eea39e8f179b9e5d4e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:19.287579 systemd-networkd[1423]: cali1f16fe052fa: Link UP Oct 30 00:02:19.292390 systemd-networkd[1423]: cali1f16fe052fa: Gained carrier Oct 30 00:02:19.318551 containerd[1550]: time="2025-10-30T00:02:19.317938830Z" level=info msg="connecting to shim 886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c" address="unix:///run/containerd/s/9ae501ccba0e252843899a2aba98e0f46f99807484dbfb90f95191c81db6a939" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:18.622 [INFO][4248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0 goldmane-666569f655- calico-system 640bc622-db68-4e4b-a017-7a6f5994fc43 863 0 2025-10-30 00:01:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc goldmane-666569f655-4nhcg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1f16fe052fa [] [] }} ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:18.622 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:18.800 [INFO][4299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" HandleID="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Workload="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:18.800 [INFO][4299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" HandleID="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Workload="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335a20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"goldmane-666569f655-4nhcg", "timestamp":"2025-10-30 00:02:18.800000938 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:18.800 [INFO][4299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.085 [INFO][4299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.086 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.119 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.165 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.194 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.201 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.213 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.213 [INFO][4299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.220 [INFO][4299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06 Oct 30 00:02:19.319331 
containerd[1550]: 2025-10-30 00:02:19.232 [INFO][4299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.244 [INFO][4299] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.134/26] block=192.168.70.128/26 handle="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.245 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.134/26] handle="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.245 [INFO][4299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:02:19.319331 containerd[1550]: 2025-10-30 00:02:19.245 [INFO][4299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.134/26] IPv6=[] ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" HandleID="k8s-pod-network.7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Workload="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.322046 containerd[1550]: 2025-10-30 00:02:19.258 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", 
UID:"640bc622-db68-4e4b-a017-7a6f5994fc43", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"goldmane-666569f655-4nhcg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f16fe052fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.322046 containerd[1550]: 2025-10-30 00:02:19.258 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.134/32] ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.322046 containerd[1550]: 2025-10-30 00:02:19.258 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f16fe052fa ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.322046 containerd[1550]: 2025-10-30 00:02:19.291 [INFO][4248] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.322046 containerd[1550]: 2025-10-30 00:02:19.292 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"640bc622-db68-4e4b-a017-7a6f5994fc43", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06", Pod:"goldmane-666569f655-4nhcg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1f16fe052fa", MAC:"e2:fc:9a:df:ca:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.322046 containerd[1550]: 2025-10-30 00:02:19.304 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" Namespace="calico-system" Pod="goldmane-666569f655-4nhcg" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-goldmane--666569f655--4nhcg-eth0" Oct 30 00:02:19.336572 systemd[1]: Started cri-containerd-93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5.scope - libcontainer container 93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5. Oct 30 00:02:19.403459 containerd[1550]: time="2025-10-30T00:02:19.403411080Z" level=info msg="connecting to shim 7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06" address="unix:///run/containerd/s/aa55d62ffdd354d9029f8cd8fe54bbab8106fb0c9996a2d0aff32157c7bff365" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:19.415905 systemd[1]: Started cri-containerd-886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c.scope - libcontainer container 886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c. Oct 30 00:02:19.419739 containerd[1550]: time="2025-10-30T00:02:19.419296461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb7cb6c6-ml8rk,Uid:9a28bd97-c5f6-423b-9db0-587ee8384ffe,Namespace:calico-system,Attempt:0,}" Oct 30 00:02:19.481296 systemd[1]: Started cri-containerd-7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06.scope - libcontainer container 7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06. 
Oct 30 00:02:19.491611 containerd[1550]: time="2025-10-30T00:02:19.491563169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-76jv2,Uid:c2d1fca0-3477-43db-ba6a-f3fce7a4c843,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5\"" Oct 30 00:02:19.494458 kubelet[2706]: E1030 00:02:19.493346 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:19.499322 containerd[1550]: time="2025-10-30T00:02:19.499206800Z" level=info msg="CreateContainer within sandbox \"93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:02:19.520203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684157732.mount: Deactivated successfully. Oct 30 00:02:19.524321 containerd[1550]: time="2025-10-30T00:02:19.522131546Z" level=info msg="Container ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:02:19.566229 containerd[1550]: time="2025-10-30T00:02:19.564791592Z" level=info msg="CreateContainer within sandbox \"93e2a5d034f233d24402ac863bde45f64a012cd80dfc9560bb5e3a40d9e6a5d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba\"" Oct 30 00:02:19.570693 containerd[1550]: time="2025-10-30T00:02:19.570544763Z" level=info msg="StartContainer for \"ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba\"" Oct 30 00:02:19.582264 containerd[1550]: time="2025-10-30T00:02:19.582194293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:19.585148 containerd[1550]: time="2025-10-30T00:02:19.584534747Z" level=info msg="connecting to shim 
ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba" address="unix:///run/containerd/s/172b3ce79d653a71a51f437ac9a2e0d24f052c9d476b07eea39e8f179b9e5d4e" protocol=ttrpc version=3 Oct 30 00:02:19.586889 containerd[1550]: time="2025-10-30T00:02:19.586420107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:02:19.588009 containerd[1550]: time="2025-10-30T00:02:19.587141592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:02:19.588380 kubelet[2706]: E1030 00:02:19.588233 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:02:19.588845 kubelet[2706]: E1030 00:02:19.588512 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:02:19.591313 kubelet[2706]: E1030 00:02:19.591226 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:19.594113 containerd[1550]: time="2025-10-30T00:02:19.593997921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:02:19.649372 systemd[1]: Started cri-containerd-ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba.scope - libcontainer container ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba. Oct 30 00:02:19.767560 kubelet[2706]: E1030 00:02:19.767500 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:19.769465 containerd[1550]: time="2025-10-30T00:02:19.769354857Z" level=info msg="StartContainer for \"ec1dbb54d6130e8b3cbf1610d9a519b629bbefffb99f9a8cc7a5a0949b0b2bba\" returns successfully" Oct 30 00:02:19.840515 containerd[1550]: time="2025-10-30T00:02:19.840313549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4nhcg,Uid:640bc622-db68-4e4b-a017-7a6f5994fc43,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c971edc6f4a383a53e404aaae1ae5386352a354408910d9db824c8766c98b06\"" Oct 30 00:02:19.886881 systemd-networkd[1423]: calie7360908ed5: Link UP Oct 30 00:02:19.890839 systemd-networkd[1423]: calie7360908ed5: Gained carrier Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.642 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0 calico-kube-controllers-8cb7cb6c6- calico-system 9a28bd97-c5f6-423b-9db0-587ee8384ffe 852 0 2025-10-30 00:01:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8cb7cb6c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc calico-kube-controllers-8cb7cb6c6-ml8rk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie7360908ed5 [] [] }} ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.644 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.782 [INFO][4549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" HandleID="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.785 [INFO][4549] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" HandleID="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001027b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"calico-kube-controllers-8cb7cb6c6-ml8rk", "timestamp":"2025-10-30 00:02:19.782920604 +0000 UTC"}, 
Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.786 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.786 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.786 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.811 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.825 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.845 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.849 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.853 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.853 [INFO][4549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.856 
[INFO][4549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761 Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.864 [INFO][4549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.876 [INFO][4549] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.135/26] block=192.168.70.128/26 handle="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.876 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.135/26] handle="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.876 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:02:19.941407 containerd[1550]: 2025-10-30 00:02:19.876 [INFO][4549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.135/26] IPv6=[] ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" HandleID="k8s-pod-network.cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.942983 containerd[1550]: 2025-10-30 00:02:19.879 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0", GenerateName:"calico-kube-controllers-8cb7cb6c6-", Namespace:"calico-system", SelfLink:"", UID:"9a28bd97-c5f6-423b-9db0-587ee8384ffe", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8cb7cb6c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"calico-kube-controllers-8cb7cb6c6-ml8rk", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7360908ed5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.942983 containerd[1550]: 2025-10-30 00:02:19.880 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.135/32] ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.942983 containerd[1550]: 2025-10-30 00:02:19.880 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7360908ed5 ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.942983 containerd[1550]: 2025-10-30 00:02:19.896 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.942983 containerd[1550]: 2025-10-30 00:02:19.897 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0", GenerateName:"calico-kube-controllers-8cb7cb6c6-", Namespace:"calico-system", SelfLink:"", UID:"9a28bd97-c5f6-423b-9db0-587ee8384ffe", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8cb7cb6c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761", Pod:"calico-kube-controllers-8cb7cb6c6-ml8rk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7360908ed5", MAC:"26:27:7f:10:61:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:19.942983 containerd[1550]: 2025-10-30 00:02:19.932 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" Namespace="calico-system" Pod="calico-kube-controllers-8cb7cb6c6-ml8rk" 
WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--kube--controllers--8cb7cb6c6--ml8rk-eth0" Oct 30 00:02:19.963782 containerd[1550]: time="2025-10-30T00:02:19.963701393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-gm29g,Uid:040b195d-6ed8-46f6-9e09-7aab95a4cc1d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"886e51acdfa90101014683c374ed5302fde8faf7d608132165f32668a200333c\"" Oct 30 00:02:19.981258 containerd[1550]: time="2025-10-30T00:02:19.981191204Z" level=info msg="connecting to shim cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761" address="unix:///run/containerd/s/8ae945dea2603d5609e7b057e0d44448d1541c9dd29aa1a41df7cbc5ebec2752" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:20.024340 systemd[1]: Started cri-containerd-cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761.scope - libcontainer container cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761. Oct 30 00:02:20.084060 containerd[1550]: time="2025-10-30T00:02:20.083994675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8cb7cb6c6-ml8rk,Uid:9a28bd97-c5f6-423b-9db0-587ee8384ffe,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb606878b5efc84e4d01876bc7fc3d3e12cb89cfd48b28ad63cc90aeb0e18761\"" Oct 30 00:02:20.142994 containerd[1550]: time="2025-10-30T00:02:20.142815602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:20.143883 containerd[1550]: time="2025-10-30T00:02:20.143836564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:02:20.144932 containerd[1550]: 
time="2025-10-30T00:02:20.144024968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:02:20.145091 kubelet[2706]: E1030 00:02:20.144267 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:02:20.145091 kubelet[2706]: E1030 00:02:20.144325 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:02:20.145091 kubelet[2706]: E1030 00:02:20.144547 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:20.145872 containerd[1550]: time="2025-10-30T00:02:20.145650769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:02:20.146321 kubelet[2706]: E1030 00:02:20.146145 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:02:20.279823 systemd-networkd[1423]: cali0a8744067f2: Gained IPv6LL Oct 30 00:02:20.407507 systemd-networkd[1423]: cali1f16fe052fa: Gained IPv6LL Oct 30 00:02:20.420103 containerd[1550]: time="2025-10-30T00:02:20.419954865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-zrklx,Uid:3d001f0c-0d5b-4f3a-ac85-24b0d4c18012,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:02:20.582598 systemd-networkd[1423]: calia33985f5a1c: Link UP Oct 30 00:02:20.583790 systemd-networkd[1423]: calia33985f5a1c: Gained carrier Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.486 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0 
calico-apiserver-74894d766f- calico-apiserver 3d001f0c-0d5b-4f3a-ac85-24b0d4c18012 855 0 2025-10-30 00:01:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74894d766f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-n-705ef66fdc calico-apiserver-74894d766f-zrklx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia33985f5a1c [] [] }} ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.486 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.522 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" HandleID="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.522 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" HandleID="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-n-705ef66fdc", "pod":"calico-apiserver-74894d766f-zrklx", "timestamp":"2025-10-30 00:02:20.522445475 +0000 UTC"}, Hostname:"ci-4459.1.0-n-705ef66fdc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.522 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.522 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.522 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-n-705ef66fdc' Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.532 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.539 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.548 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.550 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.554 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.554 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses 
from block block=192.168.70.128/26 handle="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.557 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673 Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.564 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.572 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.70.136/26] block=192.168.70.128/26 handle="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.573 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.136/26] handle="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" host="ci-4459.1.0-n-705ef66fdc" Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.573 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:02:20.613029 containerd[1550]: 2025-10-30 00:02:20.573 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.70.136/26] IPv6=[] ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" HandleID="k8s-pod-network.ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Workload="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.615258 containerd[1550]: 2025-10-30 00:02:20.576 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0", GenerateName:"calico-apiserver-74894d766f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d001f0c-0d5b-4f3a-ac85-24b0d4c18012", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74894d766f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"", Pod:"calico-apiserver-74894d766f-zrklx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.70.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia33985f5a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:20.615258 containerd[1550]: 2025-10-30 00:02:20.576 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.136/32] ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.615258 containerd[1550]: 2025-10-30 00:02:20.576 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia33985f5a1c ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.615258 containerd[1550]: 2025-10-30 00:02:20.585 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.615258 containerd[1550]: 2025-10-30 00:02:20.586 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0", GenerateName:"calico-apiserver-74894d766f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d001f0c-0d5b-4f3a-ac85-24b0d4c18012", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74894d766f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-n-705ef66fdc", ContainerID:"ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673", Pod:"calico-apiserver-74894d766f-zrklx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia33985f5a1c", MAC:"a2:33:f9:6f:26:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:02:20.615258 containerd[1550]: 2025-10-30 00:02:20.607 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" Namespace="calico-apiserver" Pod="calico-apiserver-74894d766f-zrklx" WorkloadEndpoint="ci--4459.1.0--n--705ef66fdc-k8s-calico--apiserver--74894d766f--zrklx-eth0" Oct 30 00:02:20.622621 containerd[1550]: time="2025-10-30T00:02:20.622562208Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:20.623458 containerd[1550]: time="2025-10-30T00:02:20.623404632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:02:20.623622 containerd[1550]: time="2025-10-30T00:02:20.623602611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:02:20.623965 kubelet[2706]: E1030 00:02:20.623917 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:02:20.624115 kubelet[2706]: E1030 00:02:20.624099 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:02:20.624944 kubelet[2706]: E1030 00:02:20.624849 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4j6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4nhcg_calico-system(640bc622-db68-4e4b-a017-7a6f5994fc43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:20.626211 kubelet[2706]: E1030 00:02:20.626137 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43" Oct 30 00:02:20.637959 containerd[1550]: time="2025-10-30T00:02:20.637846828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:02:20.663335 systemd-networkd[1423]: cali68f0667c2bd: Gained IPv6LL Oct 30 00:02:20.685575 containerd[1550]: 
time="2025-10-30T00:02:20.685133042Z" level=info msg="connecting to shim ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673" address="unix:///run/containerd/s/0f572c835be7690196140d4a9ef263875d4749bf140ddcaa56888177f6338eba" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:02:20.751774 systemd[1]: Started cri-containerd-ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673.scope - libcontainer container ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673. Oct 30 00:02:20.776472 kubelet[2706]: E1030 00:02:20.776334 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43" Oct 30 00:02:20.790961 kubelet[2706]: E1030 00:02:20.790922 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:20.791408 systemd-networkd[1423]: cali94028d7115a: Gained IPv6LL Oct 30 00:02:20.797261 kubelet[2706]: E1030 00:02:20.797035 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:02:20.954120 containerd[1550]: time="2025-10-30T00:02:20.953513786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74894d766f-zrklx,Uid:3d001f0c-0d5b-4f3a-ac85-24b0d4c18012,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ad2d3902fc1e5460c40a90907ddbcaf658a7caad40677a2cf62f43fd28526673\"" Oct 30 00:02:21.044964 containerd[1550]: time="2025-10-30T00:02:21.044847114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:21.045735 containerd[1550]: time="2025-10-30T00:02:21.045670461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:02:21.045842 containerd[1550]: time="2025-10-30T00:02:21.045774588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:02:21.046049 kubelet[2706]: E1030 00:02:21.045995 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:21.046135 kubelet[2706]: E1030 00:02:21.046067 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:21.046394 kubelet[2706]: E1030 00:02:21.046346 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rqfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74894d766f-gm29g_calico-apiserver(040b195d-6ed8-46f6-9e09-7aab95a4cc1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:21.046869 containerd[1550]: time="2025-10-30T00:02:21.046812103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:02:21.047852 kubelet[2706]: E1030 00:02:21.047801 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d" Oct 30 00:02:21.175299 systemd-networkd[1423]: 
calie7360908ed5: Gained IPv6LL Oct 30 00:02:21.382972 containerd[1550]: time="2025-10-30T00:02:21.382908471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:21.383933 containerd[1550]: time="2025-10-30T00:02:21.383873135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:02:21.384123 containerd[1550]: time="2025-10-30T00:02:21.383930538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:02:21.384401 kubelet[2706]: E1030 00:02:21.384355 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:02:21.384498 kubelet[2706]: E1030 00:02:21.384412 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:02:21.384967 kubelet[2706]: E1030 00:02:21.384748 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zwvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8cb7cb6c6-ml8rk_calico-system(9a28bd97-c5f6-423b-9db0-587ee8384ffe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:21.385148 containerd[1550]: time="2025-10-30T00:02:21.384951243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:02:21.386666 kubelet[2706]: E1030 00:02:21.386622 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe" Oct 30 00:02:21.735693 containerd[1550]: 
time="2025-10-30T00:02:21.735177109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:21.736242 containerd[1550]: time="2025-10-30T00:02:21.736172286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:02:21.736321 containerd[1550]: time="2025-10-30T00:02:21.736253976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:02:21.736653 kubelet[2706]: E1030 00:02:21.736597 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:21.736731 kubelet[2706]: E1030 00:02:21.736667 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:21.737137 kubelet[2706]: E1030 00:02:21.736838 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ckzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74894d766f-zrklx_calico-apiserver(3d001f0c-0d5b-4f3a-ac85-24b0d4c18012): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:21.738287 kubelet[2706]: E1030 00:02:21.738242 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012"
Oct 30 00:02:21.796955 kubelet[2706]: E1030 00:02:21.796043 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:02:21.798041 kubelet[2706]: E1030 00:02:21.797987 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d"
Oct 30 00:02:21.798534 kubelet[2706]: E1030 00:02:21.798300 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe"
Oct 30 00:02:21.798649 kubelet[2706]: E1030 00:02:21.798366 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012"
Oct 30 00:02:21.799759 kubelet[2706]: E1030 00:02:21.799162 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43"
Oct 30 00:02:21.817294 kubelet[2706]: I1030 00:02:21.817110 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-76jv2" podStartSLOduration=47.817085875 podStartE2EDuration="47.817085875s" podCreationTimestamp="2025-10-30 00:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:02:20.84268358 +0000 UTC m=+52.588479357" watchObservedRunningTime="2025-10-30 00:02:21.817085875 +0000 UTC m=+53.562881644"
Oct 30 00:02:21.880299 systemd-networkd[1423]: calia33985f5a1c: Gained IPv6LL
Oct 30 00:02:22.799895 kubelet[2706]: E1030 00:02:22.799252 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012"
Oct 30 00:02:22.799895 kubelet[2706]: E1030 00:02:22.799424 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:02:23.801063 kubelet[2706]: E1030 00:02:23.801020 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:02:27.422145 containerd[1550]: time="2025-10-30T00:02:27.421812744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 30 00:02:27.764395 containerd[1550]: time="2025-10-30T00:02:27.764159456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:27.765607 containerd[1550]: time="2025-10-30T00:02:27.765497261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 30 00:02:27.765607 containerd[1550]: time="2025-10-30T00:02:27.765564843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 30 00:02:27.765916 kubelet[2706]: E1030 00:02:27.765855 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 00:02:27.766497 kubelet[2706]: E1030 00:02:27.765912 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 00:02:27.766497 kubelet[2706]: E1030 00:02:27.766062 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:928f80f085f744a79c4bc50cfb2d04dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b8bf69cc7-2jwmd_calico-system(1d51aa8e-d9c7-44fd-8bd6-4ee88452e454): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:27.769351 containerd[1550]: time="2025-10-30T00:02:27.769250065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 30 00:02:28.126668 containerd[1550]: time="2025-10-30T00:02:28.126600276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:28.127633 containerd[1550]: time="2025-10-30T00:02:28.127582122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 30 00:02:28.127802 containerd[1550]: time="2025-10-30T00:02:28.127613853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 30 00:02:28.127880 kubelet[2706]: E1030 00:02:28.127839 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 30 00:02:28.127940 kubelet[2706]: E1030 00:02:28.127893 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 30 00:02:28.128103 kubelet[2706]: E1030 00:02:28.128027 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b8bf69cc7-2jwmd_calico-system(1d51aa8e-d9c7-44fd-8bd6-4ee88452e454): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:28.129899 kubelet[2706]: E1030 00:02:28.129819 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454"
Oct 30 00:02:29.313371 systemd[1]: Started sshd@7-143.198.78.203:22-139.178.89.65:56358.service - OpenSSH per-connection server daemon (139.178.89.65:56358).
Oct 30 00:02:29.448732 sshd[4754]: Accepted publickey for core from 139.178.89.65 port 56358 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:02:29.450742 sshd-session[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:02:29.457393 systemd-logind[1516]: New session 8 of user core.
Oct 30 00:02:29.461359 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 30 00:02:29.940373 sshd[4757]: Connection closed by 139.178.89.65 port 56358
Oct 30 00:02:29.941427 sshd-session[4754]: pam_unix(sshd:session): session closed for user core
Oct 30 00:02:29.950939 systemd[1]: sshd@7-143.198.78.203:22-139.178.89.65:56358.service: Deactivated successfully.
Oct 30 00:02:29.957893 systemd[1]: session-8.scope: Deactivated successfully.
Oct 30 00:02:29.965868 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit.
Oct 30 00:02:29.968700 systemd-logind[1516]: Removed session 8.
Oct 30 00:02:32.421060 containerd[1550]: time="2025-10-30T00:02:32.421004582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 30 00:02:32.783054 containerd[1550]: time="2025-10-30T00:02:32.782955831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:32.783855 containerd[1550]: time="2025-10-30T00:02:32.783788329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 00:02:32.784044 containerd[1550]: time="2025-10-30T00:02:32.783919512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:02:32.784304 kubelet[2706]: E1030 00:02:32.784242 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:02:32.784863 kubelet[2706]: E1030 00:02:32.784313 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:02:32.784863 kubelet[2706]: E1030 00:02:32.784466 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rqfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74894d766f-gm29g_calico-apiserver(040b195d-6ed8-46f6-9e09-7aab95a4cc1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:32.786063 kubelet[2706]: E1030 00:02:32.786012 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d"
Oct 30 00:02:34.423266 containerd[1550]: time="2025-10-30T00:02:34.423165348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 30 00:02:34.786280 containerd[1550]: time="2025-10-30T00:02:34.786220620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:34.786969 containerd[1550]: time="2025-10-30T00:02:34.786916255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 00:02:34.787041 containerd[1550]: time="2025-10-30T00:02:34.787026594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:02:34.787346 kubelet[2706]: E1030 00:02:34.787270 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:02:34.787346 kubelet[2706]: E1030 00:02:34.787327 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:02:34.788298 kubelet[2706]: E1030 00:02:34.788130 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ckzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74894d766f-zrklx_calico-apiserver(3d001f0c-0d5b-4f3a-ac85-24b0d4c18012): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:34.788538 containerd[1550]: time="2025-10-30T00:02:34.788287447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 30 00:02:34.790053 kubelet[2706]: E1030 00:02:34.789935 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012"
Oct 30 00:02:34.957249 systemd[1]: Started sshd@8-143.198.78.203:22-139.178.89.65:56370.service - OpenSSH per-connection server daemon (139.178.89.65:56370).
Oct 30 00:02:35.035234 sshd[4778]: Accepted publickey for core from 139.178.89.65 port 56370 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:02:35.037519 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:02:35.047308 systemd-logind[1516]: New session 9 of user core.
Oct 30 00:02:35.052428 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 30 00:02:35.145257 containerd[1550]: time="2025-10-30T00:02:35.145209823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:35.148103 containerd[1550]: time="2025-10-30T00:02:35.146563504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 30 00:02:35.148103 containerd[1550]: time="2025-10-30T00:02:35.146637169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 30 00:02:35.148283 kubelet[2706]: E1030 00:02:35.146824 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 30 00:02:35.148283 kubelet[2706]: E1030 00:02:35.146896 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 30 00:02:35.148283 kubelet[2706]: E1030 00:02:35.147025 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:35.149851 containerd[1550]: time="2025-10-30T00:02:35.149728051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 30 00:02:35.220535 sshd[4782]: Connection closed by 139.178.89.65 port 56370
Oct 30 00:02:35.220968 sshd-session[4778]: pam_unix(sshd:session): session closed for user core
Oct 30 00:02:35.231756 systemd[1]: sshd@8-143.198.78.203:22-139.178.89.65:56370.service: Deactivated successfully.
Oct 30 00:02:35.234897 systemd[1]: session-9.scope: Deactivated successfully.
Oct 30 00:02:35.236296 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit.
Oct 30 00:02:35.238703 systemd-logind[1516]: Removed session 9.
Oct 30 00:02:35.486962 containerd[1550]: time="2025-10-30T00:02:35.486795532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:35.488175 containerd[1550]: time="2025-10-30T00:02:35.488109446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 30 00:02:35.488314 containerd[1550]: time="2025-10-30T00:02:35.488124361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 30 00:02:35.488602 kubelet[2706]: E1030 00:02:35.488528 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 30 00:02:35.488805 kubelet[2706]: E1030 00:02:35.488778 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 30 00:02:35.489811 containerd[1550]: time="2025-10-30T00:02:35.489502607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 30 00:02:35.491580 kubelet[2706]: E1030 00:02:35.491349 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:02:35.493106 kubelet[2706]: E1030 00:02:35.492987 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4"
Oct 30 00:02:35.824590 containerd[1550]: time="2025-10-30T00:02:35.824527120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:02:35.825435 containerd[1550]: time="2025-10-30T00:02:35.825347362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 30 00:02:35.825435 containerd[1550]: time="2025-10-30T00:02:35.825398862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:02:35.825706 kubelet[2706]: E1030 00:02:35.825661 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 00:02:35.826201 kubelet[2706]: E1030 00:02:35.825723 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 00:02:35.826201 kubelet[2706]: E1030 00:02:35.825955 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4j6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4nhcg_calico-system(640bc622-db68-4e4b-a017-7a6f5994fc43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:35.827836 kubelet[2706]: E1030 00:02:35.827785 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43" Oct 30 00:02:36.424515 containerd[1550]: time="2025-10-30T00:02:36.423593491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:02:36.771484 containerd[1550]: time="2025-10-30T00:02:36.771323351Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Oct 30 00:02:36.773105 containerd[1550]: time="2025-10-30T00:02:36.772981253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:02:36.773105 containerd[1550]: time="2025-10-30T00:02:36.773037484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:02:36.773694 kubelet[2706]: E1030 00:02:36.773316 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:02:36.773694 kubelet[2706]: E1030 00:02:36.773382 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:02:36.773930 kubelet[2706]: E1030 00:02:36.773855 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zwvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8cb7cb6c6-ml8rk_calico-system(9a28bd97-c5f6-423b-9db0-587ee8384ffe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:36.775675 kubelet[2706]: E1030 00:02:36.775619 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe" Oct 30 00:02:40.237312 systemd[1]: Started sshd@9-143.198.78.203:22-139.178.89.65:53864.service - OpenSSH per-connection server daemon (139.178.89.65:53864). 
Oct 30 00:02:40.311133 sshd[4797]: Accepted publickey for core from 139.178.89.65 port 53864 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:02:40.312471 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:40.320311 systemd-logind[1516]: New session 10 of user core. Oct 30 00:02:40.327292 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 00:02:40.470333 sshd[4800]: Connection closed by 139.178.89.65 port 53864 Oct 30 00:02:40.471289 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:40.476014 systemd[1]: sshd@9-143.198.78.203:22-139.178.89.65:53864.service: Deactivated successfully. Oct 30 00:02:40.479681 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:02:40.481915 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:02:40.485527 systemd-logind[1516]: Removed session 10. Oct 30 00:02:42.425346 kubelet[2706]: E1030 00:02:42.425244 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" 
podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454" Oct 30 00:02:44.027040 containerd[1550]: time="2025-10-30T00:02:44.026990062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\" id:\"270633ac60489bc67373ae4ea919acaf59b1193137c2f5db0543ba29c446341d\" pid:4828 exited_at:{seconds:1761782564 nanos:26322004}" Oct 30 00:02:44.031866 kubelet[2706]: E1030 00:02:44.031769 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:45.421017 kubelet[2706]: E1030 00:02:45.420828 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012" Oct 30 00:02:45.484582 systemd[1]: Started sshd@10-143.198.78.203:22-139.178.89.65:53870.service - OpenSSH per-connection server daemon (139.178.89.65:53870). Oct 30 00:02:45.557697 sshd[4841]: Accepted publickey for core from 139.178.89.65 port 53870 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:02:45.559592 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:45.566015 systemd-logind[1516]: New session 11 of user core. Oct 30 00:02:45.574411 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 30 00:02:45.732326 sshd[4844]: Connection closed by 139.178.89.65 port 53870 Oct 30 00:02:45.733299 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:45.744059 systemd[1]: sshd@10-143.198.78.203:22-139.178.89.65:53870.service: Deactivated successfully. Oct 30 00:02:45.747323 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:02:45.748576 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:02:45.753902 systemd[1]: Started sshd@11-143.198.78.203:22-139.178.89.65:53878.service - OpenSSH per-connection server daemon (139.178.89.65:53878). Oct 30 00:02:45.756171 systemd-logind[1516]: Removed session 11. Oct 30 00:02:45.836400 sshd[4856]: Accepted publickey for core from 139.178.89.65 port 53878 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:02:45.838315 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:45.845793 systemd-logind[1516]: New session 12 of user core. Oct 30 00:02:45.850367 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 00:02:46.050003 sshd[4859]: Connection closed by 139.178.89.65 port 53878 Oct 30 00:02:46.053357 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:46.071369 systemd[1]: sshd@11-143.198.78.203:22-139.178.89.65:53878.service: Deactivated successfully. Oct 30 00:02:46.078181 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:02:46.084423 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:02:46.090544 systemd[1]: Started sshd@12-143.198.78.203:22-139.178.89.65:45766.service - OpenSSH per-connection server daemon (139.178.89.65:45766). Oct 30 00:02:46.096165 systemd-logind[1516]: Removed session 12. 
Oct 30 00:02:46.181944 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 45766 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:02:46.184655 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:46.193781 systemd-logind[1516]: New session 13 of user core. Oct 30 00:02:46.199392 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:02:46.369909 sshd[4872]: Connection closed by 139.178.89.65 port 45766 Oct 30 00:02:46.371006 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:46.376985 systemd[1]: sshd@12-143.198.78.203:22-139.178.89.65:45766.service: Deactivated successfully. Oct 30 00:02:46.379814 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:02:46.382966 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:02:46.385641 systemd-logind[1516]: Removed session 13. Oct 30 00:02:46.421162 kubelet[2706]: E1030 00:02:46.419761 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:46.423258 kubelet[2706]: E1030 00:02:46.423186 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d" Oct 30 00:02:47.421656 kubelet[2706]: E1030 00:02:47.421602 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43" Oct 30 00:02:48.430009 kubelet[2706]: E1030 00:02:48.429866 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe" Oct 30 00:02:48.432229 kubelet[2706]: E1030 00:02:48.430741 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:02:49.419315 kubelet[2706]: E1030 00:02:49.419270 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:51.387306 systemd[1]: Started sshd@13-143.198.78.203:22-139.178.89.65:45774.service - OpenSSH per-connection server daemon (139.178.89.65:45774). Oct 30 00:02:51.454347 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 45774 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:02:51.456409 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:51.463209 systemd-logind[1516]: New session 14 of user core. Oct 30 00:02:51.467347 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:02:51.610155 sshd[4898]: Connection closed by 139.178.89.65 port 45774 Oct 30 00:02:51.610776 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:51.617583 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:02:51.617680 systemd[1]: sshd@13-143.198.78.203:22-139.178.89.65:45774.service: Deactivated successfully. Oct 30 00:02:51.621934 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:02:51.625810 systemd-logind[1516]: Removed session 14. 
Oct 30 00:02:53.419566 kubelet[2706]: E1030 00:02:53.419431 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:54.422629 containerd[1550]: time="2025-10-30T00:02:54.422578251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:02:54.794831 containerd[1550]: time="2025-10-30T00:02:54.794773244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:54.795501 containerd[1550]: time="2025-10-30T00:02:54.795450828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:02:54.795600 containerd[1550]: time="2025-10-30T00:02:54.795540537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:02:54.795822 kubelet[2706]: E1030 00:02:54.795785 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:02:54.796780 kubelet[2706]: E1030 00:02:54.796393 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:02:54.796780 kubelet[2706]: E1030 00:02:54.796711 2706 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:928f80f085f744a79c4bc50cfb2d04dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b8bf69cc7-2jwmd_calico-system(1d51aa8e-d9c7-44fd-8bd6-4ee88452e454): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:54.799865 containerd[1550]: time="2025-10-30T00:02:54.799797053Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:02:55.151866 containerd[1550]: time="2025-10-30T00:02:55.151694629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:55.152472 containerd[1550]: time="2025-10-30T00:02:55.152411583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:02:55.152667 containerd[1550]: time="2025-10-30T00:02:55.152619324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:02:55.152955 kubelet[2706]: E1030 00:02:55.152908 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:02:55.153037 kubelet[2706]: E1030 00:02:55.152964 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:02:55.153176 kubelet[2706]: E1030 00:02:55.153115 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b8bf69cc7-2jwmd_calico-system(1d51aa8e-d9c7-44fd-8bd6-4ee88452e454): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:55.154717 kubelet[2706]: E1030 00:02:55.154672 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454" Oct 30 00:02:56.627142 systemd[1]: Started sshd@14-143.198.78.203:22-139.178.89.65:48482.service - OpenSSH per-connection server daemon (139.178.89.65:48482). Oct 30 00:02:56.745363 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 48482 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:02:56.747658 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:56.754207 systemd-logind[1516]: New session 15 of user core. Oct 30 00:02:56.762376 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:02:56.975589 sshd[4919]: Connection closed by 139.178.89.65 port 48482 Oct 30 00:02:56.976736 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:56.983286 systemd[1]: sshd@14-143.198.78.203:22-139.178.89.65:48482.service: Deactivated successfully. 
Oct 30 00:02:56.986703 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:02:56.988866 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:02:56.991415 systemd-logind[1516]: Removed session 15. Oct 30 00:02:59.420735 kubelet[2706]: E1030 00:02:59.419546 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 30 00:02:59.422349 containerd[1550]: time="2025-10-30T00:02:59.422307946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:02:59.745224 containerd[1550]: time="2025-10-30T00:02:59.744990640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:02:59.746034 containerd[1550]: time="2025-10-30T00:02:59.745982815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:02:59.746198 containerd[1550]: time="2025-10-30T00:02:59.746101302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:02:59.746343 kubelet[2706]: E1030 00:02:59.746283 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:59.746343 kubelet[2706]: E1030 00:02:59.746333 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:02:59.746556 kubelet[2706]: E1030 00:02:59.746508 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rqfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74894d766f-gm29g_calico-apiserver(040b195d-6ed8-46f6-9e09-7aab95a4cc1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:02:59.747961 kubelet[2706]: E1030 00:02:59.747883 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d" Oct 30 00:03:00.426304 containerd[1550]: time="2025-10-30T00:03:00.426243669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:03:00.761024 containerd[1550]: 
time="2025-10-30T00:03:00.760768187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:00.762165 containerd[1550]: time="2025-10-30T00:03:00.762005597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:03:00.762351 containerd[1550]: time="2025-10-30T00:03:00.762232340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:00.762619 kubelet[2706]: E1030 00:03:00.762565 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:00.763877 kubelet[2706]: E1030 00:03:00.763098 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:03:00.764014 containerd[1550]: time="2025-10-30T00:03:00.763628585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:03:00.769201 kubelet[2706]: E1030 00:03:00.769107 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4j6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4nhcg_calico-system(640bc622-db68-4e4b-a017-7a6f5994fc43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:00.770459 kubelet[2706]: E1030 00:03:00.770386 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43" Oct 30 00:03:01.140778 containerd[1550]: time="2025-10-30T00:03:01.140577010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:01.142059 containerd[1550]: time="2025-10-30T00:03:01.141952554Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:03:01.142294 containerd[1550]: time="2025-10-30T00:03:01.142165452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:03:01.143121 kubelet[2706]: E1030 00:03:01.142624 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:01.143121 kubelet[2706]: E1030 00:03:01.142703 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:03:01.143121 kubelet[2706]: E1030 00:03:01.142917 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ckzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-74894d766f-zrklx_calico-apiserver(3d001f0c-0d5b-4f3a-ac85-24b0d4c18012): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:01.144296 kubelet[2706]: E1030 00:03:01.144214 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012" Oct 30 00:03:01.996684 systemd[1]: Started sshd@15-143.198.78.203:22-139.178.89.65:48492.service - OpenSSH per-connection server daemon (139.178.89.65:48492). Oct 30 00:03:02.083594 sshd[4933]: Accepted publickey for core from 139.178.89.65 port 48492 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do Oct 30 00:03:02.086059 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:03:02.095189 systemd-logind[1516]: New session 16 of user core. Oct 30 00:03:02.100591 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:03:02.305503 sshd[4936]: Connection closed by 139.178.89.65 port 48492 Oct 30 00:03:02.306610 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:02.317673 systemd[1]: sshd@15-143.198.78.203:22-139.178.89.65:48492.service: Deactivated successfully. Oct 30 00:03:02.322632 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:03:02.327639 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:03:02.330954 systemd-logind[1516]: Removed session 16. 
Oct 30 00:03:02.422784 containerd[1550]: time="2025-10-30T00:03:02.422652371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:03:02.766031 containerd[1550]: time="2025-10-30T00:03:02.765340953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:02.767302 containerd[1550]: time="2025-10-30T00:03:02.767236778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:03:02.767460 containerd[1550]: time="2025-10-30T00:03:02.767390874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:03:02.768228 kubelet[2706]: E1030 00:03:02.768103 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:02.769541 kubelet[2706]: E1030 00:03:02.768198 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:03:02.769541 kubelet[2706]: E1030 00:03:02.769424 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:02.773567 containerd[1550]: time="2025-10-30T00:03:02.773483313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:03:03.321610 containerd[1550]: time="2025-10-30T00:03:03.321304364Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:03.322649 containerd[1550]: time="2025-10-30T00:03:03.322205681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:03:03.322649 containerd[1550]: time="2025-10-30T00:03:03.322232831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:03:03.323249 kubelet[2706]: E1030 00:03:03.323174 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:03.324241 kubelet[2706]: E1030 00:03:03.323930 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:03:03.325302 kubelet[2706]: E1030 
00:03:03.324961 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-5kjbz_calico-system(7c170e9a-cada-41cd-bd7c-14ab708f01d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:03.326860 kubelet[2706]: E1030 00:03:03.326772 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4" Oct 30 00:03:03.421460 containerd[1550]: time="2025-10-30T00:03:03.421395124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:03:03.774522 containerd[1550]: time="2025-10-30T00:03:03.774237410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:03:03.776099 containerd[1550]: time="2025-10-30T00:03:03.775929647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:03:03.776099 containerd[1550]: time="2025-10-30T00:03:03.775932254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:03:03.776870 kubelet[2706]: E1030 00:03:03.776758 2706 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:03.777840 kubelet[2706]: E1030 00:03:03.777480 2706 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:03:03.778023 kubelet[2706]: E1030 00:03:03.777784 2706 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zwvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8cb7cb6c6-ml8rk_calico-system(9a28bd97-c5f6-423b-9db0-587ee8384ffe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:03:03.779353 kubelet[2706]: E1030 00:03:03.779290 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe" Oct 30 00:03:06.422503 kubelet[2706]: E1030 00:03:06.422432 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454"
Oct 30 00:03:07.322785 systemd[1]: Started sshd@16-143.198.78.203:22-139.178.89.65:51422.service - OpenSSH per-connection server daemon (139.178.89.65:51422).
Oct 30 00:03:07.408743 sshd[4950]: Accepted publickey for core from 139.178.89.65 port 51422 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:07.410380 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:07.416950 systemd-logind[1516]: New session 17 of user core.
Oct 30 00:03:07.421403 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 30 00:03:07.578719 sshd[4953]: Connection closed by 139.178.89.65 port 51422
Oct 30 00:03:07.579707 sshd-session[4950]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:07.592394 systemd[1]: sshd@16-143.198.78.203:22-139.178.89.65:51422.service: Deactivated successfully.
Oct 30 00:03:07.595937 systemd[1]: session-17.scope: Deactivated successfully.
Oct 30 00:03:07.597896 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit.
Oct 30 00:03:07.601103 systemd-logind[1516]: Removed session 17.
Oct 30 00:03:07.602686 systemd[1]: Started sshd@17-143.198.78.203:22-139.178.89.65:51434.service - OpenSSH per-connection server daemon (139.178.89.65:51434).
Oct 30 00:03:07.675745 sshd[4964]: Accepted publickey for core from 139.178.89.65 port 51434 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:07.677782 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:07.683212 systemd-logind[1516]: New session 18 of user core.
Oct 30 00:03:07.689323 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 30 00:03:07.999770 sshd[4967]: Connection closed by 139.178.89.65 port 51434
Oct 30 00:03:08.001611 sshd-session[4964]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:08.014430 systemd[1]: sshd@17-143.198.78.203:22-139.178.89.65:51434.service: Deactivated successfully.
Oct 30 00:03:08.017454 systemd[1]: session-18.scope: Deactivated successfully.
Oct 30 00:03:08.021708 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit.
Oct 30 00:03:08.025138 systemd[1]: Started sshd@18-143.198.78.203:22-139.178.89.65:51446.service - OpenSSH per-connection server daemon (139.178.89.65:51446).
Oct 30 00:03:08.026486 systemd-logind[1516]: Removed session 18.
Oct 30 00:03:08.129629 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 51446 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:08.131606 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:08.139179 systemd-logind[1516]: New session 19 of user core.
Oct 30 00:03:08.144474 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 30 00:03:08.823629 sshd[4980]: Connection closed by 139.178.89.65 port 51446
Oct 30 00:03:08.826336 sshd-session[4977]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:08.841767 systemd[1]: sshd@18-143.198.78.203:22-139.178.89.65:51446.service: Deactivated successfully.
Oct 30 00:03:08.845830 systemd[1]: session-19.scope: Deactivated successfully.
Oct 30 00:03:08.848272 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit.
Oct 30 00:03:08.853893 systemd[1]: Started sshd@19-143.198.78.203:22-139.178.89.65:51452.service - OpenSSH per-connection server daemon (139.178.89.65:51452).
Oct 30 00:03:08.855398 systemd-logind[1516]: Removed session 19.
Oct 30 00:03:08.991234 sshd[4995]: Accepted publickey for core from 139.178.89.65 port 51452 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:08.993579 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:09.001047 systemd-logind[1516]: New session 20 of user core.
Oct 30 00:03:09.006367 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 30 00:03:09.445880 sshd[5000]: Connection closed by 139.178.89.65 port 51452
Oct 30 00:03:09.446357 sshd-session[4995]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:09.460212 systemd[1]: sshd@19-143.198.78.203:22-139.178.89.65:51452.service: Deactivated successfully.
Oct 30 00:03:09.466283 systemd[1]: session-20.scope: Deactivated successfully.
Oct 30 00:03:09.467718 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit.
Oct 30 00:03:09.471433 systemd-logind[1516]: Removed session 20.
Oct 30 00:03:09.474279 systemd[1]: Started sshd@20-143.198.78.203:22-139.178.89.65:51462.service - OpenSSH per-connection server daemon (139.178.89.65:51462).
Oct 30 00:03:09.563609 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 51462 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:09.565510 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:09.572434 systemd-logind[1516]: New session 21 of user core.
Oct 30 00:03:09.576396 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 30 00:03:09.735330 sshd[5013]: Connection closed by 139.178.89.65 port 51462
Oct 30 00:03:09.734781 sshd-session[5010]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:09.740638 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit.
Oct 30 00:03:09.740980 systemd[1]: sshd@20-143.198.78.203:22-139.178.89.65:51462.service: Deactivated successfully.
Oct 30 00:03:09.743546 systemd[1]: session-21.scope: Deactivated successfully.
Oct 30 00:03:09.747554 systemd-logind[1516]: Removed session 21.
Oct 30 00:03:10.420103 kubelet[2706]: E1030 00:03:10.419908 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:03:12.432835 kubelet[2706]: E1030 00:03:12.432023 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43"
Oct 30 00:03:13.979608 containerd[1550]: time="2025-10-30T00:03:13.979552987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfd0a574ba0aa16712469af1cfc78a19606dfd59786bc095c4a04b304744f7f2\" id:\"f836fda31a653b4b12ff6adbc6c5e5121ae3c6ffa0b28655a16f4f847e41aada\" pid:5036 exited_at:{seconds:1761782593 nanos:978987961}"
Oct 30 00:03:14.423374 kubelet[2706]: E1030 00:03:14.422975 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe"
Oct 30 00:03:14.424959 kubelet[2706]: E1030 00:03:14.424599 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012"
Oct 30 00:03:14.753377 systemd[1]: Started sshd@21-143.198.78.203:22-139.178.89.65:51472.service - OpenSSH per-connection server daemon (139.178.89.65:51472).
Oct 30 00:03:14.825295 sshd[5048]: Accepted publickey for core from 139.178.89.65 port 51472 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:14.827278 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:14.833985 systemd-logind[1516]: New session 22 of user core.
Oct 30 00:03:14.841353 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 30 00:03:14.985214 sshd[5051]: Connection closed by 139.178.89.65 port 51472
Oct 30 00:03:14.985632 sshd-session[5048]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:14.992968 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit.
Oct 30 00:03:14.993561 systemd[1]: sshd@21-143.198.78.203:22-139.178.89.65:51472.service: Deactivated successfully.
Oct 30 00:03:14.996483 systemd[1]: session-22.scope: Deactivated successfully.
Oct 30 00:03:14.999405 systemd-logind[1516]: Removed session 22.
Oct 30 00:03:15.422936 kubelet[2706]: E1030 00:03:15.422816 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d"
Oct 30 00:03:15.423911 kubelet[2706]: E1030 00:03:15.423855 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5kjbz" podUID="7c170e9a-cada-41cd-bd7c-14ab708f01d4"
Oct 30 00:03:19.422190 kubelet[2706]: E1030 00:03:19.421838 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b8bf69cc7-2jwmd" podUID="1d51aa8e-d9c7-44fd-8bd6-4ee88452e454"
Oct 30 00:03:20.005962 systemd[1]: Started sshd@22-143.198.78.203:22-139.178.89.65:39734.service - OpenSSH per-connection server daemon (139.178.89.65:39734).
Oct 30 00:03:20.081118 sshd[5065]: Accepted publickey for core from 139.178.89.65 port 39734 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:20.082795 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:20.091653 systemd-logind[1516]: New session 23 of user core.
Oct 30 00:03:20.096585 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 30 00:03:20.251857 sshd[5068]: Connection closed by 139.178.89.65 port 39734
Oct 30 00:03:20.252515 sshd-session[5065]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:20.260739 systemd[1]: sshd@22-143.198.78.203:22-139.178.89.65:39734.service: Deactivated successfully.
Oct 30 00:03:20.266630 systemd[1]: session-23.scope: Deactivated successfully.
Oct 30 00:03:20.270168 systemd-logind[1516]: Session 23 logged out. Waiting for processes to exit.
Oct 30 00:03:20.275931 systemd-logind[1516]: Removed session 23.
Oct 30 00:03:24.423706 kubelet[2706]: E1030 00:03:24.422751 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4nhcg" podUID="640bc622-db68-4e4b-a017-7a6f5994fc43"
Oct 30 00:03:25.271464 systemd[1]: Started sshd@23-143.198.78.203:22-139.178.89.65:39742.service - OpenSSH per-connection server daemon (139.178.89.65:39742).
Oct 30 00:03:25.379116 sshd[5081]: Accepted publickey for core from 139.178.89.65 port 39742 ssh2: RSA SHA256:R36h6avakroD4W10ylGeMiic55sH3vtiJobaKN4s5do
Oct 30 00:03:25.382465 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:03:25.396241 systemd-logind[1516]: New session 24 of user core.
Oct 30 00:03:25.401372 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 30 00:03:25.603028 sshd[5084]: Connection closed by 139.178.89.65 port 39742
Oct 30 00:03:25.602320 sshd-session[5081]: pam_unix(sshd:session): session closed for user core
Oct 30 00:03:25.610734 systemd-logind[1516]: Session 24 logged out. Waiting for processes to exit.
Oct 30 00:03:25.613557 systemd[1]: sshd@23-143.198.78.203:22-139.178.89.65:39742.service: Deactivated successfully.
Oct 30 00:03:25.621506 systemd[1]: session-24.scope: Deactivated successfully.
Oct 30 00:03:25.624187 systemd-logind[1516]: Removed session 24.
Oct 30 00:03:26.422875 kubelet[2706]: E1030 00:03:26.422304 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-zrklx" podUID="3d001f0c-0d5b-4f3a-ac85-24b0d4c18012"
Oct 30 00:03:27.421742 kubelet[2706]: E1030 00:03:27.421617 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 30 00:03:27.421742 kubelet[2706]: E1030 00:03:27.421724 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74894d766f-gm29g" podUID="040b195d-6ed8-46f6-9e09-7aab95a4cc1d"
Oct 30 00:03:27.422762 kubelet[2706]: E1030 00:03:27.422719 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8cb7cb6c6-ml8rk" podUID="9a28bd97-c5f6-423b-9db0-587ee8384ffe"