Sep 16 04:53:03.843691 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 04:53:03.843712 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:53:03.843720 kernel: BIOS-provided physical RAM map:
Sep 16 04:53:03.843725 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 16 04:53:03.843730 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 16 04:53:03.843734 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 16 04:53:03.843752 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Sep 16 04:53:03.843757 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Sep 16 04:53:03.843762 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 16 04:53:03.843767 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 16 04:53:03.843772 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:53:03.843777 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 16 04:53:03.843782 kernel: NX (Execute Disable) protection: active
Sep 16 04:53:03.843787 kernel: APIC: Static calls initialized
Sep 16 04:53:03.843794 kernel: SMBIOS 2.8 present.
Sep 16 04:53:03.843800 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Sep 16 04:53:03.843805 kernel: DMI: Memory slots populated: 1/1
Sep 16 04:53:03.843810 kernel: Hypervisor detected: KVM
Sep 16 04:53:03.843815 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 16 04:53:03.843820 kernel: kvm-clock: using sched offset of 4107102647 cycles
Sep 16 04:53:03.843826 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 16 04:53:03.843832 kernel: tsc: Detected 2445.406 MHz processor
Sep 16 04:53:03.843839 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 04:53:03.843845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 04:53:03.843850 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Sep 16 04:53:03.843856 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 16 04:53:03.843861 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 04:53:03.843867 kernel: Using GB pages for direct mapping
Sep 16 04:53:03.843872 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:53:03.843877 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Sep 16 04:53:03.843883 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843889 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843895 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843900 kernel: ACPI: FACS 0x000000007CFE0000 000040
Sep 16 04:53:03.843906 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843911 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843916 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843922 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:03.843927 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Sep 16 04:53:03.843934 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Sep 16 04:53:03.843941 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Sep 16 04:53:03.843947 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Sep 16 04:53:03.843952 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Sep 16 04:53:03.843958 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Sep 16 04:53:03.843964 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Sep 16 04:53:03.843971 kernel: No NUMA configuration found
Sep 16 04:53:03.843976 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Sep 16 04:53:03.843982 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff]
Sep 16 04:53:03.843988 kernel: Zone ranges:
Sep 16 04:53:03.843994 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 04:53:03.843999 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Sep 16 04:53:03.844005 kernel: Normal empty
Sep 16 04:53:03.844010 kernel: Device empty
Sep 16 04:53:03.844016 kernel: Movable zone start for each node
Sep 16 04:53:03.844021 kernel: Early memory node ranges
Sep 16 04:53:03.844028 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 16 04:53:03.844034 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Sep 16 04:53:03.844039 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Sep 16 04:53:03.844045 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:53:03.844050 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 16 04:53:03.844056 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 16 04:53:03.844062 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 16 04:53:03.844068 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 16 04:53:03.844073 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 04:53:03.844080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 16 04:53:03.844086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 16 04:53:03.844092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 04:53:03.844097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 16 04:53:03.844103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 16 04:53:03.844108 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 04:53:03.844114 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 16 04:53:03.844119 kernel: CPU topo: Max. logical packages: 1
Sep 16 04:53:03.844125 kernel: CPU topo: Max. logical dies: 1
Sep 16 04:53:03.844132 kernel: CPU topo: Max. dies per package: 1
Sep 16 04:53:03.844137 kernel: CPU topo: Max. threads per core: 1
Sep 16 04:53:03.844143 kernel: CPU topo: Num. cores per package: 2
Sep 16 04:53:03.844148 kernel: CPU topo: Num. threads per package: 2
Sep 16 04:53:03.844154 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 16 04:53:03.844159 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 16 04:53:03.844165 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 16 04:53:03.844171 kernel: Booting paravirtualized kernel on KVM
Sep 16 04:53:03.844177 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 04:53:03.844184 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 16 04:53:03.844192 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 16 04:53:03.844202 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 16 04:53:03.844212 kernel: pcpu-alloc: [0] 0 1
Sep 16 04:53:03.844222 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 16 04:53:03.844234 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:53:03.844245 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:53:03.844252 kernel: random: crng init done
Sep 16 04:53:03.844259 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 04:53:03.844267 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 16 04:53:03.844273 kernel: Fallback order for Node 0: 0
Sep 16 04:53:03.844279 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866
Sep 16 04:53:03.844284 kernel: Policy zone: DMA32
Sep 16 04:53:03.844291 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:53:03.844296 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 16 04:53:03.844302 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 04:53:03.844308 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 04:53:03.844314 kernel: Dynamic Preempt: voluntary
Sep 16 04:53:03.844322 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:53:03.844328 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:53:03.844334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 16 04:53:03.844340 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:53:03.844346 kernel: Rude variant of Tasks RCU enabled.
Sep 16 04:53:03.844352 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:53:03.844358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:53:03.844363 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 16 04:53:03.844369 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:53:03.844376 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:53:03.844382 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 16 04:53:03.844388 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 16 04:53:03.844393 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:53:03.844399 kernel: Console: colour VGA+ 80x25
Sep 16 04:53:03.844404 kernel: printk: legacy console [tty0] enabled
Sep 16 04:53:03.844410 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:53:03.844416 kernel: ACPI: Core revision 20240827
Sep 16 04:53:03.844422 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 16 04:53:03.844432 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 04:53:03.844439 kernel: x2apic enabled
Sep 16 04:53:03.844445 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 04:53:03.844452 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 16 04:53:03.844458 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fc4eb620, max_idle_ns: 440795316590 ns
Sep 16 04:53:03.844479 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Sep 16 04:53:03.844486 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 16 04:53:03.844492 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 16 04:53:03.844498 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 16 04:53:03.844506 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 04:53:03.844512 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 04:53:03.844518 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 04:53:03.844524 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 16 04:53:03.844530 kernel: active return thunk: retbleed_return_thunk
Sep 16 04:53:03.844536 kernel: RETBleed: Mitigation: untrained return thunk
Sep 16 04:53:03.844542 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 16 04:53:03.844550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 16 04:53:03.844556 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 04:53:03.844562 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 04:53:03.844568 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 04:53:03.844574 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 04:53:03.844580 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 16 04:53:03.844586 kernel: Freeing SMP alternatives memory: 32K
Sep 16 04:53:03.844592 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:53:03.844598 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:53:03.844605 kernel: landlock: Up and running.
Sep 16 04:53:03.844611 kernel: SELinux: Initializing.
Sep 16 04:53:03.844617 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 16 04:53:03.844623 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 16 04:53:03.844629 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 16 04:53:03.844635 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 16 04:53:03.844641 kernel: ... version: 0
Sep 16 04:53:03.844647 kernel: ... bit width: 48
Sep 16 04:53:03.844653 kernel: ... generic registers: 6
Sep 16 04:53:03.844661 kernel: ... value mask: 0000ffffffffffff
Sep 16 04:53:03.844667 kernel: ... max period: 00007fffffffffff
Sep 16 04:53:03.844673 kernel: ... fixed-purpose events: 0
Sep 16 04:53:03.844678 kernel: ... event mask: 000000000000003f
Sep 16 04:53:03.844684 kernel: signal: max sigframe size: 1776
Sep 16 04:53:03.844690 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:53:03.844696 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 04:53:03.844702 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:53:03.844708 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:53:03.844715 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 04:53:03.844721 kernel: .... node #0, CPUs: #1
Sep 16 04:53:03.844727 kernel: smp: Brought up 1 node, 2 CPUs
Sep 16 04:53:03.844733 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Sep 16 04:53:03.844747 kernel: Memory: 1917788K/2047464K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 125140K reserved, 0K cma-reserved)
Sep 16 04:53:03.844753 kernel: devtmpfs: initialized
Sep 16 04:53:03.844759 kernel: x86/mm: Memory block size: 128MB
Sep 16 04:53:03.844765 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:53:03.844771 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 16 04:53:03.844779 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:53:03.844785 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:53:03.844791 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:53:03.844797 kernel: audit: type=2000 audit(1757998381.221:1): state=initialized audit_enabled=0 res=1
Sep 16 04:53:03.844803 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:53:03.844809 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 04:53:03.844815 kernel: cpuidle: using governor menu
Sep 16 04:53:03.844821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:53:03.844827 kernel: dca service started, version 1.12.1
Sep 16 04:53:03.844834 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 16 04:53:03.844840 kernel: PCI: Using configuration type 1 for base access
Sep 16 04:53:03.844847 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 04:53:03.844853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:53:03.844859 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:53:03.844865 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:53:03.844871 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:53:03.844877 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:53:03.844883 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:53:03.844890 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:53:03.844896 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:53:03.844901 kernel: ACPI: Interpreter enabled
Sep 16 04:53:03.844907 kernel: ACPI: PM: (supports S0 S5)
Sep 16 04:53:03.844913 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 04:53:03.844919 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 04:53:03.844926 kernel: PCI: Using E820 reservations for host bridge windows
Sep 16 04:53:03.844932 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 16 04:53:03.844938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 04:53:03.845054 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 04:53:03.845125 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 16 04:53:03.845185 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 16 04:53:03.845195 kernel: PCI host bridge to bus 0000:00
Sep 16 04:53:03.845262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 16 04:53:03.845318 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 16 04:53:03.845376 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 16 04:53:03.845428 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Sep 16 04:53:03.845500 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 16 04:53:03.845557 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 16 04:53:03.845609 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 04:53:03.845687 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 16 04:53:03.845777 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 16 04:53:03.845847 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref]
Sep 16 04:53:03.845907 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref]
Sep 16 04:53:03.845967 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Sep 16 04:53:03.846026 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Sep 16 04:53:03.846085 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 16 04:53:03.846155 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.846219 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Sep 16 04:53:03.846279 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 16 04:53:03.846338 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 16 04:53:03.846397 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 16 04:53:03.846514 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.846596 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Sep 16 04:53:03.846695 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 16 04:53:03.846800 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 16 04:53:03.846888 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 16 04:53:03.846974 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.847078 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Sep 16 04:53:03.847148 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 16 04:53:03.847208 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 16 04:53:03.847266 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 16 04:53:03.847332 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.847397 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Sep 16 04:53:03.847456 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 16 04:53:03.847585 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 16 04:53:03.847690 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 16 04:53:03.847817 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.847927 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Sep 16 04:53:03.847991 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 16 04:53:03.848057 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 16 04:53:03.848116 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 16 04:53:03.848184 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.848245 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Sep 16 04:53:03.848304 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 16 04:53:03.848361 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 16 04:53:03.848419 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 16 04:53:03.848518 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.848631 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Sep 16 04:53:03.848697 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 16 04:53:03.848770 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 16 04:53:03.848830 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 16 04:53:03.848896 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.848961 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Sep 16 04:53:03.849019 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 16 04:53:03.849076 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 16 04:53:03.849135 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 16 04:53:03.849200 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Sep 16 04:53:03.849260 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Sep 16 04:53:03.849318 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 16 04:53:03.849380 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 16 04:53:03.849438 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 16 04:53:03.849614 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 16 04:53:03.849702 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 16 04:53:03.849784 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 16 04:53:03.849845 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f]
Sep 16 04:53:03.849903 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff]
Sep 16 04:53:03.849976 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 16 04:53:03.850035 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 16 04:53:03.850107 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Sep 16 04:53:03.850196 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff]
Sep 16 04:53:03.850262 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 16 04:53:03.850337 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref]
Sep 16 04:53:03.850407 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 16 04:53:03.850516 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Sep 16 04:53:03.850585 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit]
Sep 16 04:53:03.850645 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 16 04:53:03.850714 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Sep 16 04:53:03.850793 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff]
Sep 16 04:53:03.850855 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref]
Sep 16 04:53:03.850919 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 16 04:53:03.850990 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Sep 16 04:53:03.851053 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 16 04:53:03.851113 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 16 04:53:03.851182 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Sep 16 04:53:03.851247 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref]
Sep 16 04:53:03.851307 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 16 04:53:03.851401 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Sep 16 04:53:03.851491 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff]
Sep 16 04:53:03.851584 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref]
Sep 16 04:53:03.851646 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 16 04:53:03.851656 kernel: acpiphp: Slot [0] registered
Sep 16 04:53:03.853577 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Sep 16 04:53:03.853682 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff]
Sep 16 04:53:03.853774 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref]
Sep 16 04:53:03.853884 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref]
Sep 16 04:53:03.853954 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 16 04:53:03.853964 kernel: acpiphp: Slot [0-2] registered
Sep 16 04:53:03.854023 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 16 04:53:03.854034 kernel: acpiphp: Slot [0-3] registered
Sep 16 04:53:03.854092 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 16 04:53:03.854105 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 16 04:53:03.854111 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 16 04:53:03.854117 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 16 04:53:03.854123 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 16 04:53:03.854129 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 16 04:53:03.854136 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 16 04:53:03.854142 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 16 04:53:03.854148 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 16 04:53:03.854154 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 16 04:53:03.854162 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 16 04:53:03.854168 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 16 04:53:03.854174 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 16 04:53:03.854179 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 16 04:53:03.854186 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 16 04:53:03.854192 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 16 04:53:03.854198 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 16 04:53:03.854204 kernel: iommu: Default domain type: Translated
Sep 16 04:53:03.854210 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 04:53:03.854217 kernel: PCI: Using ACPI for IRQ routing
Sep 16 04:53:03.854224 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 16 04:53:03.854230 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 16 04:53:03.854237 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Sep 16 04:53:03.854300 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 16 04:53:03.854360 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 16 04:53:03.854419 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 16 04:53:03.854427 kernel: vgaarb: loaded
Sep 16 04:53:03.854436 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 16 04:53:03.854442 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 16 04:53:03.854448 kernel: clocksource: Switched to clocksource kvm-clock
Sep 16 04:53:03.854455 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:53:03.854461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:53:03.854490 kernel: pnp: PnP ACPI init
Sep 16 04:53:03.854570 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 16 04:53:03.854580 kernel: pnp: PnP ACPI: found 5 devices
Sep 16 04:53:03.854587 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 04:53:03.854596 kernel: NET: Registered PF_INET protocol family
Sep 16 04:53:03.854602 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 04:53:03.854611 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 16 04:53:03.854622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:53:03.854633 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 16 04:53:03.854640 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 16 04:53:03.854646 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 16 04:53:03.854652 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 16 04:53:03.854660 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 16 04:53:03.854683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:53:03.854717 kernel: NET: Registered PF_XDP protocol family
Sep 16 04:53:03.854854 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 16 04:53:03.854968 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 16 04:53:03.855069 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 16 04:53:03.855137 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Sep 16 04:53:03.855199 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Sep 16 04:53:03.855259 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Sep 16 04:53:03.855324 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 16 04:53:03.855407 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 16 04:53:03.856098 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 16 04:53:03.856205 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 16 04:53:03.856275 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 16 04:53:03.856339 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 16 04:53:03.856404 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 16 04:53:03.856493 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 16 04:53:03.856560 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 16 04:53:03.856634 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 16 04:53:03.856696 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 16 04:53:03.856775 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 16 04:53:03.856841 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 16 04:53:03.856902 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 16 04:53:03.856967 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 16 04:53:03.857033 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 16 04:53:03.857094 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Sep 16 04:53:03.857155 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 16 04:53:03.857218 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 16 04:53:03.857279 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Sep 16 04:53:03.857343 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 16 04:53:03.857409 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 16 04:53:03.859535 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 16 04:53:03.859674 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Sep 16 04:53:03.859808 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Sep 16 04:53:03.859930 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 16 04:53:03.860043 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 16 04:53:03.860160 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Sep 16 04:53:03.860261 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 16 04:53:03.860368 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 16 04:53:03.862482 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 16 04:53:03.862569 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 16 04:53:03.862635 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 16 04:53:03.862691 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Sep 16 04:53:03.862761 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 16 04:53:03.862817 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 16 04:53:03.862886 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 16 04:53:03.862943 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Sep 16 04:53:03.863012 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 16 04:53:03.863068 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 16 04:53:03.863130 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 16 04:53:03.863185 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 16 04:53:03.863253 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 16 04:53:03.863308 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 16 04:53:03.863375 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 16 04:53:03.863430 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 16 04:53:03.864570 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 16 04:53:03.864648 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 16 04:53:03.864712 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Sep 16 04:53:03.864791 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 16 04:53:03.864853 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 16 04:53:03.864919 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Sep 16 04:53:03.864976 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Sep 16 04:53:03.865031 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 16 04:53:03.865093 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Sep 16 04:53:03.865149 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 16 04:53:03.865203 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 16 04:53:03.865215 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 16 04:53:03.865222 kernel: PCI: CLS 
0 bytes, default 64 Sep 16 04:53:03.865229 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fc4eb620, max_idle_ns: 440795316590 ns Sep 16 04:53:03.865236 kernel: Initialise system trusted keyrings Sep 16 04:53:03.865243 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 16 04:53:03.865249 kernel: Key type asymmetric registered Sep 16 04:53:03.865255 kernel: Asymmetric key parser 'x509' registered Sep 16 04:53:03.865262 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 04:53:03.865270 kernel: io scheduler mq-deadline registered Sep 16 04:53:03.865276 kernel: io scheduler kyber registered Sep 16 04:53:03.865283 kernel: io scheduler bfq registered Sep 16 04:53:03.865352 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 16 04:53:03.865417 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 16 04:53:03.866545 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 16 04:53:03.866634 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 16 04:53:03.866702 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 16 04:53:03.866781 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 16 04:53:03.866853 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 16 04:53:03.866914 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 16 04:53:03.866976 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 16 04:53:03.867036 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 16 04:53:03.867099 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 16 04:53:03.867160 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 16 04:53:03.867223 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 16 04:53:03.867284 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 16 04:53:03.867351 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 16 04:53:03.867422 kernel: pcieport 0000:00:02.7: AER: enabled with 
IRQ 31 Sep 16 04:53:03.867446 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 16 04:53:03.868585 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Sep 16 04:53:03.868662 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Sep 16 04:53:03.868675 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 04:53:03.868687 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Sep 16 04:53:03.868694 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:53:03.868700 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 04:53:03.868707 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 04:53:03.868714 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 04:53:03.868720 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 04:53:03.868727 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 16 04:53:03.868821 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 16 04:53:03.868885 kernel: rtc_cmos 00:03: registered as rtc0 Sep 16 04:53:03.868941 kernel: rtc_cmos 00:03: setting system clock to 2025-09-16T04:53:03 UTC (1757998383) Sep 16 04:53:03.868996 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 16 04:53:03.869005 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 16 04:53:03.869013 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:53:03.869019 kernel: Segment Routing with IPv6 Sep 16 04:53:03.869026 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:53:03.869033 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:53:03.869042 kernel: Key type dns_resolver registered Sep 16 04:53:03.869049 kernel: IPI shorthand broadcast: enabled Sep 16 04:53:03.869055 kernel: sched_clock: Marking stable (2937012522, 146177172)->(3090301109, -7111415) Sep 16 04:53:03.869062 kernel: registered taskstats version 1 Sep 16 04:53:03.869068 kernel: Loading compiled-in X.509 
certificates Sep 16 04:53:03.869075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 04:53:03.869081 kernel: Demotion targets for Node 0: null Sep 16 04:53:03.869088 kernel: Key type .fscrypt registered Sep 16 04:53:03.869094 kernel: Key type fscrypt-provisioning registered Sep 16 04:53:03.869102 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:53:03.869109 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:53:03.869115 kernel: ima: No architecture policies found Sep 16 04:53:03.869121 kernel: clk: Disabling unused clocks Sep 16 04:53:03.869128 kernel: Warning: unable to open an initial console. Sep 16 04:53:03.869134 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:53:03.869141 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:53:03.869147 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:53:03.869154 kernel: Run /init as init process Sep 16 04:53:03.869161 kernel: with arguments: Sep 16 04:53:03.869169 kernel: /init Sep 16 04:53:03.869175 kernel: with environment: Sep 16 04:53:03.869181 kernel: HOME=/ Sep 16 04:53:03.869187 kernel: TERM=linux Sep 16 04:53:03.869194 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:53:03.869202 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:53:03.869212 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:53:03.869221 systemd[1]: Detected virtualization kvm. Sep 16 04:53:03.869228 systemd[1]: Detected architecture x86-64. Sep 16 04:53:03.869234 systemd[1]: Running in initrd. 
Sep 16 04:53:03.869241 systemd[1]: No hostname configured, using default hostname. Sep 16 04:53:03.869248 systemd[1]: Hostname set to . Sep 16 04:53:03.869254 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:53:03.869261 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:53:03.869268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:53:03.869276 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:53:03.869283 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:53:03.869290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:53:03.869297 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:53:03.869304 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:53:03.869312 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:53:03.869319 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:53:03.869327 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:53:03.869334 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:53:03.869340 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:53:03.869347 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:53:03.869354 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:53:03.869361 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:53:03.869369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 16 04:53:03.869376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:53:03.869384 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:53:03.869390 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:53:03.869397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:53:03.869404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:53:03.869411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:53:03.869418 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:53:03.869424 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:53:03.869431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:53:03.869438 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:53:03.869446 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:53:03.869453 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:53:03.869460 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:53:03.873260 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:53:03.873270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:03.873278 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:53:03.873293 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:53:03.873300 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:53:03.873348 systemd-journald[216]: Collecting audit messages is disabled. 
Sep 16 04:53:03.873372 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:53:03.873381 systemd-journald[216]: Journal started Sep 16 04:53:03.873399 systemd-journald[216]: Runtime Journal (/run/log/journal/9e944a3a22fc401ebd0769b2f37cf7a1) is 4.8M, max 38.6M, 33.7M free. Sep 16 04:53:03.853677 systemd-modules-load[217]: Inserted module 'overlay' Sep 16 04:53:03.908347 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:53:03.908379 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:53:03.908390 kernel: Bridge firewalling registered Sep 16 04:53:03.882270 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 16 04:53:03.907401 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:53:03.908036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:03.908903 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:53:03.911560 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:53:03.913908 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:53:03.919968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:53:03.926363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:53:03.933120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:53:03.936149 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:53:03.939963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 16 04:53:03.941219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:53:03.944142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:53:03.945334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:53:03.947569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:53:03.961891 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:53:03.977970 systemd-resolved[254]: Positive Trust Anchors: Sep 16 04:53:03.977986 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:53:03.978010 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:53:03.983405 systemd-resolved[254]: Defaulting to hostname 'linux'. Sep 16 04:53:03.984174 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:53:03.985245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 16 04:53:04.039503 kernel: SCSI subsystem initialized Sep 16 04:53:04.049504 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:53:04.060508 kernel: iscsi: registered transport (tcp) Sep 16 04:53:04.081797 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:53:04.081864 kernel: QLogic iSCSI HBA Driver Sep 16 04:53:04.100079 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:53:04.118580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:53:04.119864 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:53:04.171824 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:53:04.175340 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:53:04.226523 kernel: raid6: avx2x4 gen() 29496 MB/s Sep 16 04:53:04.243514 kernel: raid6: avx2x2 gen() 34996 MB/s Sep 16 04:53:04.260626 kernel: raid6: avx2x1 gen() 22621 MB/s Sep 16 04:53:04.260709 kernel: raid6: using algorithm avx2x2 gen() 34996 MB/s Sep 16 04:53:04.279511 kernel: raid6: .... xor() 31523 MB/s, rmw enabled Sep 16 04:53:04.279582 kernel: raid6: using avx2x2 recovery algorithm Sep 16 04:53:04.300531 kernel: xor: automatically using best checksumming function avx Sep 16 04:53:04.424519 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:53:04.430019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:53:04.432912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:53:04.458522 systemd-udevd[464]: Using default interface naming scheme 'v255'. Sep 16 04:53:04.462280 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:53:04.465085 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Sep 16 04:53:04.489948 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Sep 16 04:53:04.512969 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:53:04.514894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:53:04.585885 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:53:04.588978 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:53:04.695503 kernel: ACPI: bus type USB registered Sep 16 04:53:04.700516 kernel: cryptd: max_cpu_qlen set to 1000 Sep 16 04:53:04.702495 kernel: usbcore: registered new interface driver usbfs Sep 16 04:53:04.707504 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Sep 16 04:53:04.718027 kernel: usbcore: registered new interface driver hub Sep 16 04:53:04.718078 kernel: usbcore: registered new device driver usb Sep 16 04:53:04.723491 kernel: scsi host0: Virtio SCSI HBA Sep 16 04:53:04.728498 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 16 04:53:04.736489 kernel: AES CTR mode by8 optimization enabled Sep 16 04:53:04.739351 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 16 04:53:04.749944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:53:04.751679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:04.752673 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:04.758364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 16 04:53:04.773506 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 16 04:53:04.777593 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 16 04:53:04.782522 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 16 04:53:04.782772 kernel: sd 0:0:0:0: Power-on or device reset occurred Sep 16 04:53:04.785496 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 16 04:53:04.787722 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 16 04:53:04.787883 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 16 04:53:04.787997 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Sep 16 04:53:04.788089 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 16 04:53:04.791528 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 16 04:53:04.792501 kernel: libata version 3.00 loaded. Sep 16 04:53:04.794502 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 16 04:53:04.794855 kernel: hub 1-0:1.0: USB hub found Sep 16 04:53:04.795002 kernel: hub 1-0:1.0: 4 ports detected Sep 16 04:53:04.795434 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 16 04:53:04.795758 kernel: hub 2-0:1.0: USB hub found Sep 16 04:53:04.796254 kernel: hub 2-0:1.0: 4 ports detected Sep 16 04:53:04.796351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 16 04:53:04.796362 kernel: GPT:17805311 != 80003071 Sep 16 04:53:04.796370 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 16 04:53:04.796378 kernel: GPT:17805311 != 80003071 Sep 16 04:53:04.796385 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 16 04:53:04.796392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:04.796404 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 16 04:53:04.805520 kernel: ahci 0000:00:1f.2: version 3.0 Sep 16 04:53:04.805785 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 16 04:53:04.808832 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 16 04:53:04.808970 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 16 04:53:04.809089 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 16 04:53:04.813500 kernel: scsi host1: ahci Sep 16 04:53:04.814651 kernel: scsi host2: ahci Sep 16 04:53:04.818491 kernel: scsi host3: ahci Sep 16 04:53:04.822485 kernel: scsi host4: ahci Sep 16 04:53:04.827483 kernel: scsi host5: ahci Sep 16 04:53:04.831488 kernel: scsi host6: ahci Sep 16 04:53:04.831609 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 lpm-pol 1 Sep 16 04:53:04.831620 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 lpm-pol 1 Sep 16 04:53:04.831632 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 lpm-pol 1 Sep 16 04:53:04.831640 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 lpm-pol 1 Sep 16 04:53:04.831647 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 lpm-pol 1 Sep 16 04:53:04.831655 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 lpm-pol 1 Sep 16 04:53:04.883152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Sep 16 04:53:04.890389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:04.909008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 16 04:53:04.916544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Sep 16 04:53:04.922919 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 16 04:53:04.923451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 16 04:53:04.926255 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:53:04.943675 disk-uuid[626]: Primary Header is updated. Sep 16 04:53:04.943675 disk-uuid[626]: Secondary Entries is updated. Sep 16 04:53:04.943675 disk-uuid[626]: Secondary Header is updated. Sep 16 04:53:04.955530 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:04.976501 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:05.033549 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 16 04:53:05.144484 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:05.144557 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:05.144567 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 16 04:53:05.144576 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:05.148740 kernel: ata1.00: LPM support broken, forcing max_power Sep 16 04:53:05.148773 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 16 04:53:05.148782 kernel: ata1.00: applying bridge limits Sep 16 04:53:05.149490 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:05.150501 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 16 04:53:05.154058 kernel: ata1.00: LPM support broken, forcing max_power Sep 16 04:53:05.154086 kernel: ata1.00: configured for UDMA/100 Sep 16 04:53:05.154869 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 16 04:53:05.177497 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:53:05.182618 kernel: usbcore: registered new interface driver usbhid Sep 16 04:53:05.182669 kernel: usbhid: USB HID core driver Sep 16 04:53:05.189869 kernel: input: QEMU QEMU USB 
Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 16 04:53:05.189918 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 16 04:53:05.190081 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 16 04:53:05.190208 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:53:05.220499 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Sep 16 04:53:05.499364 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:53:05.500703 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:53:05.501535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:53:05.502944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:53:05.505071 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:53:05.542140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:53:05.971315 disk-uuid[627]: The operation has completed successfully. Sep 16 04:53:05.972711 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:53:06.030297 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:53:06.030435 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:53:06.069829 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:53:06.083494 sh[660]: Success Sep 16 04:53:06.100815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 16 04:53:06.100887 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:53:06.101635 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:53:06.112509 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 16 04:53:06.152288 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:53:06.155560 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:53:06.166996 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:53:06.177493 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (672) Sep 16 04:53:06.180628 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e Sep 16 04:53:06.180660 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:53:06.190550 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 16 04:53:06.190584 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:53:06.193016 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:53:06.194803 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:53:06.196266 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:53:06.197658 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:53:06.198366 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:53:06.202556 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 16 04:53:06.237582 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (705) Sep 16 04:53:06.241095 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:06.241172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 16 04:53:06.247564 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 16 04:53:06.247609 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:53:06.249887 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:53:06.255514 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364 Sep 16 04:53:06.256830 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:53:06.258375 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:53:06.366947 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:53:06.372625 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:53:06.380457 ignition[766]: Ignition 2.22.0 Sep 16 04:53:06.380515 ignition[766]: Stage: fetch-offline Sep 16 04:53:06.380550 ignition[766]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:53:06.380557 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 16 04:53:06.382616 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 16 04:53:06.380631 ignition[766]: parsed url from cmdline: "" Sep 16 04:53:06.380633 ignition[766]: no config URL provided Sep 16 04:53:06.380637 ignition[766]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:53:06.380643 ignition[766]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:53:06.380647 ignition[766]: failed to fetch config: resource requires networking Sep 16 04:53:06.380878 ignition[766]: Ignition finished successfully Sep 16 04:53:06.403662 systemd-networkd[845]: lo: Link UP Sep 16 04:53:06.403673 systemd-networkd[845]: lo: Gained carrier Sep 16 04:53:06.405658 systemd-networkd[845]: Enumeration completed Sep 16 04:53:06.405776 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:53:06.406540 systemd[1]: Reached target network.target - Network. Sep 16 04:53:06.407803 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:06.407808 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:06.408356 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 16 04:53:06.409639 systemd-networkd[845]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:06.409643 systemd-networkd[845]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:06.412294 systemd-networkd[845]: eth0: Link UP Sep 16 04:53:06.412456 systemd-networkd[845]: eth1: Link UP Sep 16 04:53:06.413766 systemd-networkd[845]: eth0: Gained carrier Sep 16 04:53:06.413781 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:53:06.420890 systemd-networkd[845]: eth1: Gained carrier
Sep 16 04:53:06.420911 systemd-networkd[845]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:53:06.433439 ignition[850]: Ignition 2.22.0
Sep 16 04:53:06.433459 ignition[850]: Stage: fetch
Sep 16 04:53:06.433670 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:06.433681 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 16 04:53:06.433794 ignition[850]: parsed url from cmdline: ""
Sep 16 04:53:06.433797 ignition[850]: no config URL provided
Sep 16 04:53:06.433803 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:53:06.433811 ignition[850]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:53:06.433838 ignition[850]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 16 04:53:06.434030 ignition[850]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 16 04:53:06.463559 systemd-networkd[845]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 16 04:53:06.486557 systemd-networkd[845]: eth0: DHCPv4 address 37.27.208.182/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 16 04:53:06.635065 ignition[850]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 16 04:53:06.641488 ignition[850]: GET result: OK
Sep 16 04:53:06.641573 ignition[850]: parsing config with SHA512: 67352387a563069f2a056867d0822475cddd58a19efb66abe59d8d1dbd42895b0b376024568adb6ee2d71f5e63067ddc4c247b4966d82cf1bcbe58ccaddd9e32
Sep 16 04:53:06.646658 unknown[850]: fetched base config from "system"
Sep 16 04:53:06.647498 unknown[850]: fetched base config from "system"
Sep 16 04:53:06.647516 unknown[850]: fetched user config from "hetzner"
Sep 16 04:53:06.648108 ignition[850]: fetch: fetch complete
Sep 16 04:53:06.648113 ignition[850]: fetch: fetch passed
Sep 16 04:53:06.651302 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 04:53:06.648186 ignition[850]: Ignition finished successfully
Sep 16 04:53:06.652930 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:53:06.689040 ignition[858]: Ignition 2.22.0
Sep 16 04:53:06.689055 ignition[858]: Stage: kargs
Sep 16 04:53:06.689223 ignition[858]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:06.689233 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 16 04:53:06.690054 ignition[858]: kargs: kargs passed
Sep 16 04:53:06.690103 ignition[858]: Ignition finished successfully
Sep 16 04:53:06.694599 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 04:53:06.696856 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:53:06.730508 ignition[865]: Ignition 2.22.0
Sep 16 04:53:06.730518 ignition[865]: Stage: disks
Sep 16 04:53:06.730660 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:06.730670 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 16 04:53:06.731397 ignition[865]: disks: disks passed
Sep 16 04:53:06.732807 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:53:06.731438 ignition[865]: Ignition finished successfully
Sep 16 04:53:06.734194 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:53:06.735008 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:53:06.736265 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:53:06.737310 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:53:06.738619 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:53:06.740790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:53:06.784413 systemd-fsck[874]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 16 04:53:06.788350 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:53:06.790112 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:53:06.893499 kernel: EXT4-fs (sda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 04:53:06.893934 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:53:06.894871 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:53:06.896979 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:53:06.900538 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:53:06.909573 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 16 04:53:06.911343 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:53:06.912367 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:53:06.915623 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:53:06.918453 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:53:06.921493 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (882)
Sep 16 04:53:06.927953 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:06.928001 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:53:06.939686 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 16 04:53:06.939770 kernel: BTRFS info (device sda6): turning on async discard
Sep 16 04:53:06.939786 kernel: BTRFS info (device sda6): enabling free space tree
Sep 16 04:53:06.949023 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:53:06.983701 coreos-metadata[884]: Sep 16 04:53:06.983 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 16 04:53:06.986237 coreos-metadata[884]: Sep 16 04:53:06.984 INFO Fetch successful
Sep 16 04:53:06.986237 coreos-metadata[884]: Sep 16 04:53:06.985 INFO wrote hostname ci-4459-0-0-n-200d586c0a to /sysroot/etc/hostname
Sep 16 04:53:06.987432 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:53:06.991208 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:53:06.997588 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:53:07.003039 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:53:07.007812 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:53:07.105493 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:53:07.107738 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:53:07.110651 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:53:07.128513 kernel: BTRFS info (device sda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:07.142597 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:53:07.159737 ignition[999]: INFO : Ignition 2.22.0
Sep 16 04:53:07.159737 ignition[999]: INFO : Stage: mount
Sep 16 04:53:07.163157 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:07.163157 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 16 04:53:07.163157 ignition[999]: INFO : mount: mount passed
Sep 16 04:53:07.163157 ignition[999]: INFO : Ignition finished successfully
Sep 16 04:53:07.162803 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:53:07.165571 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:53:07.176165 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:53:07.184496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:53:07.209676 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1012)
Sep 16 04:53:07.209745 kernel: BTRFS info (device sda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:07.211923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:53:07.220640 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 16 04:53:07.220692 kernel: BTRFS info (device sda6): turning on async discard
Sep 16 04:53:07.220732 kernel: BTRFS info (device sda6): enabling free space tree
Sep 16 04:53:07.223611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:53:07.256389 ignition[1028]: INFO : Ignition 2.22.0
Sep 16 04:53:07.256389 ignition[1028]: INFO : Stage: files
Sep 16 04:53:07.257950 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:07.257950 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 16 04:53:07.260104 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:53:07.260104 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:53:07.260104 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:53:07.264083 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:53:07.265136 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:53:07.266489 unknown[1028]: wrote ssh authorized keys file for user: core
Sep 16 04:53:07.267433 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:53:07.268920 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 16 04:53:07.275774 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 16 04:53:07.431607 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:53:07.817001 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:53:07.818573 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:53:07.826239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:53:07.826239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:53:07.826239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:53:07.826239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:53:07.826239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:53:07.826239 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 16 04:53:08.189652 systemd-networkd[845]: eth0: Gained IPv6LL
Sep 16 04:53:08.356123 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 04:53:08.381797 systemd-networkd[845]: eth1: Gained IPv6LL
Sep 16 04:53:13.242805 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:53:13.242805 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 04:53:13.245409 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:53:13.247737 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:53:13.247737 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 04:53:13.247737 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:53:13.253032 ignition[1028]: INFO : files: files passed
Sep 16 04:53:13.253032 ignition[1028]: INFO : Ignition finished successfully
Sep 16 04:53:13.250964 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:53:13.255577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:53:13.258401 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:53:13.273970 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:53:13.274986 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:53:13.278059 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:53:13.278059 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:53:13.280529 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:53:13.280874 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:53:13.282053 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:53:13.284102 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:53:13.330193 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:53:13.330293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:53:13.332617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:53:13.333919 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:53:13.334559 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:53:13.335284 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:53:13.350397 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:53:13.353197 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:53:13.373351 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:53:13.375271 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:53:13.376016 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:53:13.377312 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:53:13.377492 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:53:13.378922 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:53:13.379677 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:53:13.381102 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:53:13.382241 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:53:13.383406 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:53:13.384749 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:53:13.386081 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:53:13.387331 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:53:13.388755 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:53:13.390001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:53:13.391336 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:53:13.392572 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:53:13.392742 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:53:13.394050 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:53:13.394864 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:53:13.396026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:53:13.396388 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:53:13.397316 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:53:13.397416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:53:13.399177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:53:13.399326 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:53:13.400624 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:53:13.400777 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:53:13.406324 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 16 04:53:13.406538 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 16 04:53:13.408575 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:53:13.411673 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:53:13.413229 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:53:13.413345 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:53:13.415208 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:53:13.415384 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:53:13.418861 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:53:13.420557 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:53:13.439431 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:53:13.441424 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:53:13.441792 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:53:13.444633 ignition[1083]: INFO : Ignition 2.22.0
Sep 16 04:53:13.444633 ignition[1083]: INFO : Stage: umount
Sep 16 04:53:13.445712 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:13.445712 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 16 04:53:13.445712 ignition[1083]: INFO : umount: umount passed
Sep 16 04:53:13.445712 ignition[1083]: INFO : Ignition finished successfully
Sep 16 04:53:13.446438 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:53:13.446537 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:53:13.447650 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:53:13.447707 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:53:13.448314 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:53:13.448347 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:53:13.449209 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 04:53:13.449238 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 04:53:13.450117 systemd[1]: Stopped target network.target - Network.
Sep 16 04:53:13.450954 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:53:13.450994 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:53:13.451874 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:53:13.452724 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:53:13.457512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:53:13.458026 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:53:13.459141 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:53:13.460174 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:53:13.460207 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:53:13.461174 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:53:13.461201 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:53:13.462019 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:53:13.462061 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:53:13.462998 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:53:13.463033 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:53:13.463923 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:53:13.463960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:53:13.465038 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:53:13.465952 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:53:13.472072 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:53:13.472174 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:53:13.475389 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:53:13.475610 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:53:13.475714 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:53:13.478426 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:53:13.478836 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:53:13.479675 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:53:13.479701 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:53:13.481318 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:53:13.482793 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:53:13.482834 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:53:13.484780 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:53:13.484814 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:53:13.486188 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:53:13.486221 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:53:13.487079 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:53:13.487115 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:53:13.488602 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:53:13.490918 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:53:13.490968 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:53:13.497089 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:53:13.497366 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:53:13.498393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:53:13.498421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:53:13.500308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:53:13.500331 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:53:13.501397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:53:13.501431 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:53:13.503092 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:53:13.503124 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:53:13.504454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:53:13.504523 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:53:13.506538 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:53:13.507733 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:53:13.507774 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:53:13.510590 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:53:13.510629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:53:13.511954 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 16 04:53:13.511987 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:53:13.513138 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:53:13.513168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:53:13.513876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:53:13.513912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:13.521319 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:53:13.521360 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 16 04:53:13.521386 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:53:13.521414 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:53:13.521755 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:53:13.521817 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:53:13.522947 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:53:13.523024 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:53:13.524383 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:53:13.525578 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:53:13.538042 systemd[1]: Switching root.
Sep 16 04:53:13.578121 systemd-journald[216]: Journal stopped
Sep 16 04:53:14.428829 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:53:14.428878 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:53:14.428890 kernel: SELinux: policy capability open_perms=1
Sep 16 04:53:14.428897 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:53:14.428904 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:53:14.428912 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:53:14.428919 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:53:14.428929 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:53:14.428937 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:53:14.428946 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:53:14.428956 kernel: audit: type=1403 audit(1757998393.707:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:53:14.428966 systemd[1]: Successfully loaded SELinux policy in 62.228ms.
Sep 16 04:53:14.428983 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.681ms.
Sep 16 04:53:14.428996 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:53:14.429006 systemd[1]: Detected virtualization kvm.
Sep 16 04:53:14.429015 systemd[1]: Detected architecture x86-64.
Sep 16 04:53:14.429041 systemd[1]: Detected first boot.
Sep 16 04:53:14.429051 systemd[1]: Hostname set to .
Sep 16 04:53:14.429059 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:53:14.429068 zram_generator::config[1128]: No configuration found.
Sep 16 04:53:14.429087 kernel: Guest personality initialized and is inactive
Sep 16 04:53:14.429104 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 04:53:14.429113 kernel: Initialized host personality
Sep 16 04:53:14.429122 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:53:14.429130 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:53:14.429139 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:53:14.429147 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:53:14.429155 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:53:14.429167 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:53:14.429178 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:53:14.429186 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:53:14.429195 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:53:14.429203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:53:14.429211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:53:14.429220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:53:14.429229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:53:14.429237 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:53:14.429246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:53:14.429256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:53:14.429264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:53:14.429273 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:53:14.429282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:53:14.429290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:53:14.429299 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:53:14.429309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:53:14.429317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:53:14.429325 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:53:14.429333 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:53:14.429341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:53:14.429349 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:53:14.429358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:53:14.429366 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:53:14.429375 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:53:14.429385 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:53:14.429393 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:53:14.429401 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:53:14.429410 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:53:14.429419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:53:14.429427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:53:14.429435 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:53:14.429443 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:53:14.429452 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:53:14.429461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:53:14.429494 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:53:14.429503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:14.429511 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:53:14.429520 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:53:14.429528 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:53:14.429537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:53:14.429547 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:53:14.429557 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:53:14.429565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:14.429574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:53:14.429582 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:53:14.429590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:53:14.429599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:53:14.429607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:53:14.429615 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:53:14.429624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:53:14.429644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:53:14.429653 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:53:14.429661 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:53:14.429669 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:53:14.429677 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:53:14.429686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:14.429695 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:53:14.429703 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:53:14.429717 kernel: loop: module loaded
Sep 16 04:53:14.429726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:53:14.429737 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:53:14.429745 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:53:14.429755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:53:14.429765 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:53:14.429773 systemd[1]: Stopped verity-setup.service.
Sep 16 04:53:14.429782 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:14.429790 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:53:14.429799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:53:14.429808 kernel: fuse: init (API version 7.41)
Sep 16 04:53:14.429816 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:53:14.429824 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:53:14.429833 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:53:14.429841 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:53:14.429867 systemd-journald[1209]: Collecting audit messages is disabled.
Sep 16 04:53:14.429887 systemd-journald[1209]: Journal started
Sep 16 04:53:14.429910 systemd-journald[1209]: Runtime Journal (/run/log/journal/9e944a3a22fc401ebd0769b2f37cf7a1) is 4.8M, max 38.6M, 33.7M free.
Sep 16 04:53:14.163998 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:53:14.172363 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 16 04:53:14.172925 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:53:14.432497 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:53:14.433077 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:53:14.433888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:53:14.434588 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:53:14.434779 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:53:14.435435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:53:14.435622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:53:14.436314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:53:14.436423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:53:14.437352 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:53:14.437575 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:53:14.438229 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:53:14.438422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:53:14.439325 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:53:14.440030 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:53:14.440859 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:53:14.441650 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:53:14.449173 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:53:14.452538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:53:14.455525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:53:14.456015 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:53:14.456042 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:53:14.457243 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:53:14.465545 kernel: ACPI: bus type drm_connector registered
Sep 16 04:53:14.470164 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:53:14.471858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:14.473868 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:53:14.478722 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:53:14.479434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:53:14.483557 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:53:14.484130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:53:14.489097 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:53:14.495572 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:53:14.498584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:53:14.500870 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:53:14.501166 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:53:14.502726 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:53:14.503770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:53:14.530963 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:53:14.532920 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:53:14.536692 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:53:14.541997 systemd-journald[1209]: Time spent on flushing to /var/log/journal/9e944a3a22fc401ebd0769b2f37cf7a1 is 62.906ms for 1175 entries.
Sep 16 04:53:14.541997 systemd-journald[1209]: System Journal (/var/log/journal/9e944a3a22fc401ebd0769b2f37cf7a1) is 8M, max 584.8M, 576.8M free.
Sep 16 04:53:14.616197 systemd-journald[1209]: Received client request to flush runtime journal.
Sep 16 04:53:14.616272 kernel: loop0: detected capacity change from 0 to 224512
Sep 16 04:53:14.557053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:53:14.568024 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:53:14.595233 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 16 04:53:14.595251 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Sep 16 04:53:14.600283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:53:14.603726 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:53:14.610582 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:53:14.625741 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:53:14.624937 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:53:14.654705 kernel: loop1: detected capacity change from 0 to 128016
Sep 16 04:53:14.660917 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:53:14.662809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:53:14.689521 kernel: loop2: detected capacity change from 0 to 8
Sep 16 04:53:14.699673 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 16 04:53:14.699953 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 16 04:53:14.704019 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:53:14.707309 kernel: loop3: detected capacity change from 0 to 110984
Sep 16 04:53:14.737500 kernel: loop4: detected capacity change from 0 to 224512
Sep 16 04:53:14.757512 kernel: loop5: detected capacity change from 0 to 128016
Sep 16 04:53:14.778503 kernel: loop6: detected capacity change from 0 to 8
Sep 16 04:53:14.781490 kernel: loop7: detected capacity change from 0 to 110984
Sep 16 04:53:14.795357 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Sep 16 04:53:14.796061 (sd-merge)[1280]: Merged extensions into '/usr'.
Sep 16 04:53:14.802580 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:53:14.802717 systemd[1]: Reloading...
Sep 16 04:53:14.859493 zram_generator::config[1306]: No configuration found.
Sep 16 04:53:15.014488 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:53:15.028599 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:53:15.029035 systemd[1]: Reloading finished in 225 ms.
Sep 16 04:53:15.041958 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:53:15.042932 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:53:15.053608 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:53:15.054834 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:53:15.064330 systemd[1]: Reload requested from client PID 1349 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:53:15.064436 systemd[1]: Reloading...
Sep 16 04:53:15.077614 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:53:15.077655 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:53:15.077859 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:53:15.078397 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:53:15.079846 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:53:15.080105 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Sep 16 04:53:15.082765 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Sep 16 04:53:15.084957 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:53:15.085024 systemd-tmpfiles[1350]: Skipping /boot
Sep 16 04:53:15.094255 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:53:15.094343 systemd-tmpfiles[1350]: Skipping /boot
Sep 16 04:53:15.132507 zram_generator::config[1377]: No configuration found.
Sep 16 04:53:15.280049 systemd[1]: Reloading finished in 215 ms.
Sep 16 04:53:15.301809 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:53:15.306148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:53:15.313583 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:53:15.315811 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:53:15.322020 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:53:15.326736 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:53:15.329388 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:53:15.331538 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:53:15.340739 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:53:15.344180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:15.344339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:15.345879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:53:15.354318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:53:15.362220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:53:15.363198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:15.363615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:15.363732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:15.366220 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:53:15.368724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:53:15.368850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:53:15.370593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:53:15.370728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:53:15.379823 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:15.380181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:15.382226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:53:15.390601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:53:15.391267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:15.391366 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:15.395605 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:53:15.396162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:15.397383 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:53:15.405215 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:15.405373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:15.407563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:53:15.408514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:15.408603 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:15.408721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:15.409191 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:53:15.410144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:53:15.412842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:53:15.413934 augenrules[1461]: No rules
Sep 16 04:53:15.413418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:53:15.414881 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:53:15.416277 systemd-udevd[1426]: Using default interface naming scheme 'v255'.
Sep 16 04:53:15.417870 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:53:15.419448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:53:15.421046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:53:15.423177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:53:15.424539 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:53:15.424697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:53:15.429896 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:53:15.432748 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:53:15.436151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:53:15.438565 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 16 04:53:15.440593 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:53:15.442710 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:53:15.443580 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:53:15.461945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:53:15.464375 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:53:15.547279 systemd-resolved[1425]: Positive Trust Anchors:
Sep 16 04:53:15.547292 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:53:15.547317 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:53:15.555005 systemd-resolved[1425]: Using system hostname 'ci-4459-0-0-n-200d586c0a'.
Sep 16 04:53:15.559307 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:53:15.559881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:53:15.560916 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 16 04:53:15.562161 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:53:15.563562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:53:15.564554 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:53:15.565916 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 04:53:15.567139 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:53:15.568342 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:53:15.568438 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:53:15.569382 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:53:15.570262 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:53:15.571534 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:53:15.572756 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:53:15.576998 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:53:15.580598 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:53:15.584927 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:53:15.586737 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:53:15.588526 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:53:15.595984 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:53:15.596947 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:53:15.599207 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:53:15.601935 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 04:53:15.602957 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:53:15.603664 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:53:15.604179 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:53:15.604261 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:53:15.606222 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 16 04:53:15.610592 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:53:15.614891 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:53:15.617191 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:53:15.621654 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:53:15.622522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:53:15.627380 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 04:53:15.631930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 04:53:15.644412 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 04:53:15.647733 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 04:53:15.651607 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 04:53:15.656521 jq[1520]: false
Sep 16 04:53:15.656260 oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Sep 16 04:53:15.656853 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Sep 16 04:53:15.658267 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 04:53:15.663008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 04:53:15.665492 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting
Sep 16 04:53:15.665492 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:53:15.665492 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache
Sep 16 04:53:15.665492 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting
Sep 16 04:53:15.665492 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:53:15.665186 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 04:53:15.664524 oslogin_cache_refresh[1525]: Failure getting users, quitting
Sep 16 04:53:15.664540 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:53:15.664576 oslogin_cache_refresh[1525]: Refreshing group entry cache
Sep 16 04:53:15.664928 oslogin_cache_refresh[1525]: Failure getting groups, quitting
Sep 16 04:53:15.664935 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:53:15.667711 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 04:53:15.667879 coreos-metadata[1516]: Sep 16 04:53:15.667 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Sep 16 04:53:15.669562 coreos-metadata[1516]: Sep 16 04:53:15.669 INFO Failed to fetch: error sending request for url (http://169.254.169.254/hetzner/v1/metadata)
Sep 16 04:53:15.669937 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 04:53:15.672169 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 04:53:15.676967 extend-filesystems[1523]: Found /dev/sda6
Sep 16 04:53:15.677657 systemd-networkd[1479]: lo: Link UP
Sep 16 04:53:15.677983 systemd-networkd[1479]: lo: Gained carrier
Sep 16 04:53:15.678232 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 04:53:15.678392 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 04:53:15.679783 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 04:53:15.679933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 04:53:15.681905 systemd-networkd[1479]: Enumeration completed
Sep 16 04:53:15.682992 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:53:15.684717 systemd[1]: Reached target network.target - Network.
Sep 16 04:53:15.687296 extend-filesystems[1523]: Found /dev/sda9
Sep 16 04:53:15.689223 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:53:15.695209 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:53:15.704628 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:53:15.706892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 04:53:15.707436 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 04:53:15.710167 extend-filesystems[1523]: Checking size of /dev/sda9 Sep 16 04:53:15.736494 jq[1538]: true Sep 16 04:53:15.742157 extend-filesystems[1523]: Resized partition /dev/sda9 Sep 16 04:53:15.744544 update_engine[1537]: I20250916 04:53:15.744128 1537 main.cc:92] Flatcar Update Engine starting Sep 16 04:53:15.748415 tar[1540]: linux-amd64/LICENSE Sep 16 04:53:15.748415 tar[1540]: linux-amd64/helm Sep 16 04:53:15.747764 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:53:15.747569 dbus-daemon[1517]: [system] SELinux support is enabled Sep 16 04:53:15.754525 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:53:15.753602 systemd-networkd[1479]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:15.753606 systemd-networkd[1479]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:15.758235 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:53:15.758257 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:53:15.759134 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:53:15.759148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:53:15.759959 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:53:15.764239 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Sep 16 04:53:15.774068 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 16 04:53:15.774107 jq[1564]: true Sep 16 04:53:15.772175 systemd-networkd[1479]: eth1: Link UP Sep 16 04:53:15.772737 systemd-networkd[1479]: eth1: Gained carrier Sep 16 04:53:15.772758 systemd-networkd[1479]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:15.778689 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:15.778693 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:53:15.779945 systemd-networkd[1479]: eth0: Link UP Sep 16 04:53:15.780754 systemd-networkd[1479]: eth0: Gained carrier Sep 16 04:53:15.780772 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:53:15.784212 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:53:15.787161 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:53:15.791703 update_engine[1537]: I20250916 04:53:15.785449 1537 update_check_scheduler.cc:74] Next update check in 7m47s Sep 16 04:53:15.795994 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:53:15.796178 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:53:15.806189 systemd-networkd[1479]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 16 04:53:15.807917 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Sep 16 04:53:15.840523 systemd-networkd[1479]: eth0: DHCPv4 address 37.27.208.182/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 16 04:53:15.842643 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. 
Sep 16 04:53:15.844673 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Sep 16 04:53:15.859236 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 16 04:53:15.872099 extend-filesystems[1568]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 16 04:53:15.872099 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 16 04:53:15.872099 extend-filesystems[1568]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 16 04:53:15.877904 extend-filesystems[1523]: Resized filesystem in /dev/sda9 Sep 16 04:53:15.873077 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:53:15.873245 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:53:15.883526 bash[1590]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:53:15.894096 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:53:15.899657 systemd[1]: Starting sshkeys.service... Sep 16 04:53:15.940511 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 04:53:15.944442 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 16 04:53:16.035718 systemd-logind[1535]: New seat seat0. Sep 16 04:53:16.037305 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 16 04:53:16.045281 coreos-metadata[1600]: Sep 16 04:53:16.044 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 16 04:53:16.047627 coreos-metadata[1600]: Sep 16 04:53:16.047 INFO Fetch successful Sep 16 04:53:16.048651 unknown[1600]: wrote ssh authorized keys file for user: core Sep 16 04:53:16.068888 containerd[1569]: time="2025-09-16T04:53:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:53:16.070505 containerd[1569]: time="2025-09-16T04:53:16.069499972Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:53:16.081211 update-ssh-keys[1610]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:53:16.082785 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 04:53:16.084711 systemd[1]: Finished sshkeys.service. 
Sep 16 04:53:16.087476 containerd[1569]: time="2025-09-16T04:53:16.086311706Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.599µs" Sep 16 04:53:16.088790 containerd[1569]: time="2025-09-16T04:53:16.088768112Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:53:16.088864 containerd[1569]: time="2025-09-16T04:53:16.088851097Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:53:16.089003 containerd[1569]: time="2025-09-16T04:53:16.088988765Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090351732Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090383391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090436671Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090445978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090657465Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090670709Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090682071Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090689514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090749918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090914657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091314 containerd[1569]: time="2025-09-16T04:53:16.090938732Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:53:16.091513 containerd[1569]: time="2025-09-16T04:53:16.090946457Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:53:16.091513 containerd[1569]: time="2025-09-16T04:53:16.090970161Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:53:16.091513 containerd[1569]: time="2025-09-16T04:53:16.091175857Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:53:16.091513 containerd[1569]: time="2025-09-16T04:53:16.091219960Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:53:16.095238 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:53:16.095494 kernel: mousedev: 
PS/2 mouse device common for all mice Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095802582Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095849470Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095866903Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095877643Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095887652Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095940812Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095957884Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095967692Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095975487Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095983732Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.095990454Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.096004721Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.096092336Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:53:16.096681 containerd[1569]: time="2025-09-16T04:53:16.096109788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096124677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096135677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096143842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096151928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096160213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096167626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096175762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096194828Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 
04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096203604Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096256804Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096270600Z" level=info msg="Start snapshots syncer" Sep 16 04:53:16.096908 containerd[1569]: time="2025-09-16T04:53:16.096306126Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:53:16.097100 containerd[1569]: time="2025-09-16T04:53:16.096556817Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolume
s\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:53:16.097100 containerd[1569]: time="2025-09-16T04:53:16.096598535Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:53:16.097240 containerd[1569]: time="2025-09-16T04:53:16.097225690Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:53:16.097399 containerd[1569]: time="2025-09-16T04:53:16.097383577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098507444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098524697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098536739Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098551267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098560093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: 
time="2025-09-16T04:53:16.098567837Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098585371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098603685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098647577Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098682783Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098699815Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098706878Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098714062Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:53:16.098920 containerd[1569]: time="2025-09-16T04:53:16.098720153Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098728960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098776789Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098789373Z" level=info msg="runtime interface created" Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098793120Z" level=info msg="created NRI interface" Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098798680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098807397Z" level=info msg="Connect containerd service" Sep 16 04:53:16.099132 containerd[1569]: time="2025-09-16T04:53:16.098826362Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:53:16.099834 containerd[1569]: time="2025-09-16T04:53:16.099817270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:53:16.142799 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 16 04:53:16.202492 kernel: ACPI: button: Power Button [PWRF] Sep 16 04:53:16.204762 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 16 04:53:16.208688 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Sep 16 04:53:16.254177 containerd[1569]: time="2025-09-16T04:53:16.253959883Z" level=info msg="Start subscribing containerd event" Sep 16 04:53:16.254332 containerd[1569]: time="2025-09-16T04:53:16.254288118Z" level=info msg="Start recovering state" Sep 16 04:53:16.254496 containerd[1569]: time="2025-09-16T04:53:16.254483375Z" level=info msg="Start event monitor" Sep 16 04:53:16.255416 containerd[1569]: time="2025-09-16T04:53:16.255170123Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:53:16.255416 containerd[1569]: time="2025-09-16T04:53:16.255199929Z" level=info msg="Start streaming server" Sep 16 04:53:16.255416 containerd[1569]: time="2025-09-16T04:53:16.255208965Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:53:16.255416 containerd[1569]: time="2025-09-16T04:53:16.255217121Z" level=info msg="runtime interface starting up..." Sep 16 04:53:16.255416 containerd[1569]: time="2025-09-16T04:53:16.255222531Z" level=info msg="starting plugins..." Sep 16 04:53:16.255416 containerd[1569]: time="2025-09-16T04:53:16.255236237Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:53:16.258482 containerd[1569]: time="2025-09-16T04:53:16.256062726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:53:16.258482 containerd[1569]: time="2025-09-16T04:53:16.256130143Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:53:16.256257 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 16 04:53:16.259181 containerd[1569]: time="2025-09-16T04:53:16.258796923Z" level=info msg="containerd successfully booted in 0.190191s" Sep 16 04:53:16.294016 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 16 04:53:16.294328 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 16 04:53:16.311832 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:53:16.331976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 16 04:53:16.337389 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:53:16.366980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:53:16.384502 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:53:16.389332 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:53:16.403344 tar[1540]: linux-amd64/README.md Sep 16 04:53:16.408657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:16.414506 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Sep 16 04:53:16.414556 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Sep 16 04:53:16.416575 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:53:16.416768 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:53:16.421185 kernel: Console: switching to colour dummy device 80x25 Sep 16 04:53:16.421221 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 16 04:53:16.421236 kernel: [drm] features: -context_init Sep 16 04:53:16.421455 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 16 04:53:16.422854 kernel: [drm] number of scanouts: 1 Sep 16 04:53:16.422878 kernel: [drm] number of cap sets: 0 Sep 16 04:53:16.424114 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Sep 16 04:53:16.424632 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:53:16.425668 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 16 04:53:16.425691 kernel: Console: switching to colour frame buffer device 160x50 Sep 16 04:53:16.437013 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 16 04:53:16.447109 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:53:16.451637 systemd[1]: Started sshd@0-37.27.208.182:22-139.178.89.65:48534.service - OpenSSH per-connection server daemon (139.178.89.65:48534). Sep 16 04:53:16.459211 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:53:16.461224 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:53:16.465280 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 04:53:16.467675 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:53:16.478487 kernel: EDAC MC: Ver: 3.0.0 Sep 16 04:53:16.515181 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 04:53:16.526354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:16.536585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:53:16.536740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:16.540380 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:16.543089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:16.545815 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 16 04:53:16.555853 systemd-logind[1535]: Watching system buttons on /dev/input/event3 (Power Button) Sep 16 04:53:16.567166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:53:16.567801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:16.569894 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:53:16.572367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:53:16.602791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:53:16.669789 coreos-metadata[1516]: Sep 16 04:53:16.669 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #2 Sep 16 04:53:16.670684 coreos-metadata[1516]: Sep 16 04:53:16.670 INFO Fetch successful Sep 16 04:53:16.670763 coreos-metadata[1516]: Sep 16 04:53:16.670 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 16 04:53:16.671999 coreos-metadata[1516]: Sep 16 04:53:16.671 INFO Fetch successful Sep 16 04:53:16.722232 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:53:16.723102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:53:16.957670 systemd-networkd[1479]: eth0: Gained IPv6LL Sep 16 04:53:16.958567 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Sep 16 04:53:16.960192 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:53:16.960972 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:53:16.966906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:16.969550 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:53:16.997974 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 16 04:53:17.451117 sshd[1668]: Accepted publickey for core from 139.178.89.65 port 48534 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:17.452155 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:17.464239 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:53:17.466766 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:53:17.469342 systemd-logind[1535]: New session 1 of user core. Sep 16 04:53:17.486496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:53:17.492547 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:53:17.509586 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:53:17.512024 systemd-logind[1535]: New session c1 of user core. Sep 16 04:53:17.629423 systemd[1715]: Queued start job for default target default.target. Sep 16 04:53:17.634554 systemd[1715]: Created slice app.slice - User Application Slice. Sep 16 04:53:17.634576 systemd[1715]: Reached target paths.target - Paths. Sep 16 04:53:17.634702 systemd[1715]: Reached target timers.target - Timers. Sep 16 04:53:17.636580 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:53:17.645289 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:53:17.645550 systemd[1715]: Reached target sockets.target - Sockets. Sep 16 04:53:17.645718 systemd[1715]: Reached target basic.target - Basic System. Sep 16 04:53:17.645806 systemd[1715]: Reached target default.target - Main User Target. Sep 16 04:53:17.645885 systemd[1715]: Startup finished in 128ms. Sep 16 04:53:17.645949 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:53:17.652590 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 16 04:53:17.661660 systemd-networkd[1479]: eth1: Gained IPv6LL Sep 16 04:53:17.662756 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Sep 16 04:53:17.849639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:53:17.850620 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:53:17.852752 systemd[1]: Startup finished in 3.020s (kernel) + 10.047s (initrd) + 4.206s (userspace) = 17.274s. Sep 16 04:53:17.856748 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:53:18.374197 systemd[1]: Started sshd@1-37.27.208.182:22-139.178.89.65:48536.service - OpenSSH per-connection server daemon (139.178.89.65:48536). Sep 16 04:53:18.407432 kubelet[1729]: E0916 04:53:18.407392 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:53:18.412612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:53:18.412825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:53:18.413544 systemd[1]: kubelet.service: Consumed 907ms CPU time, 264.6M memory peak. Sep 16 04:53:19.479092 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 48536 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk Sep 16 04:53:19.481332 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:53:19.490045 systemd-logind[1535]: New session 2 of user core. Sep 16 04:53:19.495703 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 16 04:53:20.222506 sshd[1745]: Connection closed by 139.178.89.65 port 48536
Sep 16 04:53:20.223322 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:20.229306 systemd[1]: sshd@1-37.27.208.182:22-139.178.89.65:48536.service: Deactivated successfully.
Sep 16 04:53:20.233024 systemd[1]: session-2.scope: Deactivated successfully.
Sep 16 04:53:20.234825 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit.
Sep 16 04:53:20.238100 systemd-logind[1535]: Removed session 2.
Sep 16 04:53:20.385428 systemd[1]: Started sshd@2-37.27.208.182:22-139.178.89.65:41588.service - OpenSSH per-connection server daemon (139.178.89.65:41588).
Sep 16 04:53:21.364286 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 41588 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:53:21.366709 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:21.375593 systemd-logind[1535]: New session 3 of user core.
Sep 16 04:53:21.386797 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 16 04:53:22.030312 sshd[1754]: Connection closed by 139.178.89.65 port 41588
Sep 16 04:53:22.030916 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:22.034918 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit.
Sep 16 04:53:22.035646 systemd[1]: sshd@2-37.27.208.182:22-139.178.89.65:41588.service: Deactivated successfully.
Sep 16 04:53:22.037224 systemd[1]: session-3.scope: Deactivated successfully.
Sep 16 04:53:22.038918 systemd-logind[1535]: Removed session 3.
Sep 16 04:53:22.199620 systemd[1]: Started sshd@3-37.27.208.182:22-139.178.89.65:41598.service - OpenSSH per-connection server daemon (139.178.89.65:41598).
Sep 16 04:53:23.180945 sshd[1760]: Accepted publickey for core from 139.178.89.65 port 41598 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:53:23.182345 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:23.187379 systemd-logind[1535]: New session 4 of user core.
Sep 16 04:53:23.192698 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 16 04:53:23.856103 sshd[1763]: Connection closed by 139.178.89.65 port 41598
Sep 16 04:53:23.857005 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:23.862117 systemd[1]: sshd@3-37.27.208.182:22-139.178.89.65:41598.service: Deactivated successfully.
Sep 16 04:53:23.864709 systemd[1]: session-4.scope: Deactivated successfully.
Sep 16 04:53:23.866954 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit.
Sep 16 04:53:23.869710 systemd-logind[1535]: Removed session 4.
Sep 16 04:53:24.025089 systemd[1]: Started sshd@4-37.27.208.182:22-139.178.89.65:41614.service - OpenSSH per-connection server daemon (139.178.89.65:41614).
Sep 16 04:53:25.012344 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 41614 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:53:25.013668 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:25.018615 systemd-logind[1535]: New session 5 of user core.
Sep 16 04:53:25.025612 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 16 04:53:25.535232 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 16 04:53:25.535458 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:25.557243 sudo[1773]: pam_unix(sudo:session): session closed for user root
Sep 16 04:53:25.714535 sshd[1772]: Connection closed by 139.178.89.65 port 41614
Sep 16 04:53:25.715286 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:25.719605 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit.
Sep 16 04:53:25.719795 systemd[1]: sshd@4-37.27.208.182:22-139.178.89.65:41614.service: Deactivated successfully.
Sep 16 04:53:25.721394 systemd[1]: session-5.scope: Deactivated successfully.
Sep 16 04:53:25.722779 systemd-logind[1535]: Removed session 5.
Sep 16 04:53:25.924649 systemd[1]: Started sshd@5-37.27.208.182:22-139.178.89.65:41616.service - OpenSSH per-connection server daemon (139.178.89.65:41616).
Sep 16 04:53:27.021587 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 41616 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:53:27.022932 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:27.028902 systemd-logind[1535]: New session 6 of user core.
Sep 16 04:53:27.041705 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 16 04:53:27.593304 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 16 04:53:27.593606 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:27.598792 sudo[1784]: pam_unix(sudo:session): session closed for user root
Sep 16 04:53:27.603554 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 16 04:53:27.603785 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:27.612779 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:53:27.644817 augenrules[1806]: No rules
Sep 16 04:53:27.645886 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:53:27.646084 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:53:27.647677 sudo[1783]: pam_unix(sudo:session): session closed for user root
Sep 16 04:53:27.823293 sshd[1782]: Connection closed by 139.178.89.65 port 41616
Sep 16 04:53:27.823835 sshd-session[1779]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:27.826916 systemd[1]: sshd@5-37.27.208.182:22-139.178.89.65:41616.service: Deactivated successfully.
Sep 16 04:53:27.829044 systemd[1]: session-6.scope: Deactivated successfully.
Sep 16 04:53:27.830151 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit.
Sep 16 04:53:27.831330 systemd-logind[1535]: Removed session 6.
Sep 16 04:53:28.020336 systemd[1]: Started sshd@6-37.27.208.182:22-139.178.89.65:41622.service - OpenSSH per-connection server daemon (139.178.89.65:41622).
Sep 16 04:53:28.530903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:53:28.533687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:53:28.669229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
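The "Scheduled restart job, restart counter is at 1" line is systemd automatically re-running the failed kubelet unit: the first failure was logged at 04:53:18.41 and the restart scheduled at 04:53:28.53, i.e. roughly a 10 s delay. A `[Service]` fragment that would produce this pattern looks like the sketch below (assumed settings in the style of stock kubeadm drop-ins; the values in the unit actually shipped on this host were not captured in the log):

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-restart.conf
[Service]
Restart=always
RestartSec=10
```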
Sep 16 04:53:28.671766 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:53:28.703286 kubelet[1826]: E0916 04:53:28.703214 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:53:28.706664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:53:28.706784 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:53:28.707180 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.6M memory peak.
Sep 16 04:53:29.118884 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 41622 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:53:29.121309 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:29.131690 systemd-logind[1535]: New session 7 of user core.
Sep 16 04:53:29.134687 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 16 04:53:29.693359 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 16 04:53:29.693799 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:30.152425 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 16 04:53:30.175757 (dockerd)[1852]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 16 04:53:30.454316 dockerd[1852]: time="2025-09-16T04:53:30.453654517Z" level=info msg="Starting up"
Sep 16 04:53:30.454972 dockerd[1852]: time="2025-09-16T04:53:30.454940488Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 16 04:53:30.466364 dockerd[1852]: time="2025-09-16T04:53:30.466298065Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 16 04:53:30.487402 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4203033162-merged.mount: Deactivated successfully.
Sep 16 04:53:30.495034 systemd[1]: var-lib-docker-metacopy\x2dcheck1144676242-merged.mount: Deactivated successfully.
Sep 16 04:53:30.523489 dockerd[1852]: time="2025-09-16T04:53:30.523359991Z" level=info msg="Loading containers: start."
Sep 16 04:53:30.535507 kernel: Initializing XFRM netlink socket
Sep 16 04:53:30.697020 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection.
Sep 16 04:53:30.732379 systemd-networkd[1479]: docker0: Link UP
Sep 16 04:53:30.738449 dockerd[1852]: time="2025-09-16T04:53:30.738412527Z" level=info msg="Loading containers: done."
Sep 16 04:53:31.312776 systemd-resolved[1425]: Clock change detected. Flushing caches.
Sep 16 04:53:31.313330 systemd-timesyncd[1471]: Contacted time server 193.158.22.13:123 (2.flatcar.pool.ntp.org).
Sep 16 04:53:31.313387 systemd-timesyncd[1471]: Initial clock synchronization to Tue 2025-09-16 04:53:31.312373 UTC.
Sep 16 04:53:31.321590 dockerd[1852]: time="2025-09-16T04:53:31.321556402Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 16 04:53:31.321676 dockerd[1852]: time="2025-09-16T04:53:31.321623217Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 16 04:53:31.321698 dockerd[1852]: time="2025-09-16T04:53:31.321683721Z" level=info msg="Initializing buildkit"
Sep 16 04:53:31.343300 dockerd[1852]: time="2025-09-16T04:53:31.343255671Z" level=info msg="Completed buildkit initialization"
Sep 16 04:53:31.348871 dockerd[1852]: time="2025-09-16T04:53:31.348806319Z" level=info msg="Daemon has completed initialization"
Sep 16 04:53:31.349304 dockerd[1852]: time="2025-09-16T04:53:31.348973963Z" level=info msg="API listen on /run/docker.sock"
Sep 16 04:53:31.349229 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 16 04:53:32.053113 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1729158722-merged.mount: Deactivated successfully.
Sep 16 04:53:32.426509 containerd[1569]: time="2025-09-16T04:53:32.426440783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 16 04:53:32.876128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2748812411.mount: Deactivated successfully.
Sep 16 04:53:34.279913 containerd[1569]: time="2025-09-16T04:53:34.279845225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:34.281254 containerd[1569]: time="2025-09-16T04:53:34.281213170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016"
Sep 16 04:53:34.282163 containerd[1569]: time="2025-09-16T04:53:34.281753534Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:34.284862 containerd[1569]: time="2025-09-16T04:53:34.284826075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:34.285917 containerd[1569]: time="2025-09-16T04:53:34.285886032Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.859370078s"
Sep 16 04:53:34.286000 containerd[1569]: time="2025-09-16T04:53:34.285986821Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 16 04:53:34.287209 containerd[1569]: time="2025-09-16T04:53:34.287180659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 16 04:53:35.839803 containerd[1569]: time="2025-09-16T04:53:35.839736530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:35.840938 containerd[1569]: time="2025-09-16T04:53:35.840905371Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049"
Sep 16 04:53:35.841625 containerd[1569]: time="2025-09-16T04:53:35.841575899Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:35.844527 containerd[1569]: time="2025-09-16T04:53:35.844474304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:35.845426 containerd[1569]: time="2025-09-16T04:53:35.845403146Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.558137678s"
Sep 16 04:53:35.845523 containerd[1569]: time="2025-09-16T04:53:35.845509124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 16 04:53:35.846075 containerd[1569]: time="2025-09-16T04:53:35.846034068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 16 04:53:36.856332 containerd[1569]: time="2025-09-16T04:53:36.856262320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:36.857487 containerd[1569]: time="2025-09-16T04:53:36.857220336Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311"
Sep 16 04:53:36.858248 containerd[1569]: time="2025-09-16T04:53:36.858226733Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:36.860485 containerd[1569]: time="2025-09-16T04:53:36.860463567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:36.861154 containerd[1569]: time="2025-09-16T04:53:36.861128845Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.015057005s"
Sep 16 04:53:36.861191 containerd[1569]: time="2025-09-16T04:53:36.861159973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 16 04:53:36.861926 containerd[1569]: time="2025-09-16T04:53:36.861901483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 16 04:53:37.820808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986186278.mount: Deactivated successfully.
Sep 16 04:53:38.112849 containerd[1569]: time="2025-09-16T04:53:38.112785017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:38.114052 containerd[1569]: time="2025-09-16T04:53:38.113854823Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234"
Sep 16 04:53:38.114941 containerd[1569]: time="2025-09-16T04:53:38.114914501Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:38.116578 containerd[1569]: time="2025-09-16T04:53:38.116552953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:38.117081 containerd[1569]: time="2025-09-16T04:53:38.117050495Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.25512125s"
Sep 16 04:53:38.117193 containerd[1569]: time="2025-09-16T04:53:38.117174849Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 16 04:53:38.118027 containerd[1569]: time="2025-09-16T04:53:38.117748554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 16 04:53:38.591217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031349846.mount: Deactivated successfully.
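The containerd "Pulled image … size X in Ys" lines allow a quick throughput estimate for the control-plane pulls; kube-apiserver, for example, is 28,834,515 bytes in 1.859370078 s, about 14.8 MiB/s. A small sketch of that arithmetic over the sizes and durations logged above:

```python
# Size (bytes) and pull duration (seconds) taken from the containerd log lines above.
pulls = {
    "kube-apiserver:v1.32.9": (28834515, 1.859370078),
    "kube-controller-manager:v1.32.9": (26421706, 1.558137678),
    "kube-scheduler:v1.32.9": (20810986, 1.015057005),
    "kube-proxy:v1.32.9": (30923225, 1.25512125),
}

def mib_per_s(size_bytes: int, seconds: float) -> float:
    """Average pull throughput in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

for image, (size, dur) in pulls.items():
    print(f"{image}: {mib_per_s(size, dur):.1f} MiB/s")
```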
Sep 16 04:53:39.350448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 16 04:53:39.353061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:53:39.433604 containerd[1569]: time="2025-09-16T04:53:39.433561999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:39.435155 containerd[1569]: time="2025-09-16T04:53:39.435110021Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Sep 16 04:53:39.436340 containerd[1569]: time="2025-09-16T04:53:39.436282069Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:39.443330 containerd[1569]: time="2025-09-16T04:53:39.443229797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:39.444715 containerd[1569]: time="2025-09-16T04:53:39.444473439Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.326695249s"
Sep 16 04:53:39.444715 containerd[1569]: time="2025-09-16T04:53:39.444563849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 16 04:53:39.445364 containerd[1569]: time="2025-09-16T04:53:39.445028139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 16 04:53:39.453431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:53:39.464685 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:53:39.506458 kubelet[2197]: E0916 04:53:39.506381 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:53:39.508657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:53:39.509243 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:53:39.509726 systemd[1]: kubelet.service: Consumed 117ms CPU time, 109.2M memory peak.
Sep 16 04:53:39.870589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825762757.mount: Deactivated successfully.
Sep 16 04:53:39.876152 containerd[1569]: time="2025-09-16T04:53:39.876092178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 16 04:53:39.876993 containerd[1569]: time="2025-09-16T04:53:39.876806006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Sep 16 04:53:39.877819 containerd[1569]: time="2025-09-16T04:53:39.877790793Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 16 04:53:39.879684 containerd[1569]: time="2025-09-16T04:53:39.879652935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 16 04:53:39.880271 containerd[1569]: time="2025-09-16T04:53:39.880249403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 435.199243ms"
Sep 16 04:53:39.880343 containerd[1569]: time="2025-09-16T04:53:39.880330625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 16 04:53:39.880891 containerd[1569]: time="2025-09-16T04:53:39.880837947Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 16 04:53:40.353339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596398725.mount: Deactivated successfully.
Sep 16 04:53:41.847439 containerd[1569]: time="2025-09-16T04:53:41.846677306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:41.848641 containerd[1569]: time="2025-09-16T04:53:41.848595963Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132"
Sep 16 04:53:41.850514 containerd[1569]: time="2025-09-16T04:53:41.848954446Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:41.859823 containerd[1569]: time="2025-09-16T04:53:41.859787970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:41.860522 containerd[1569]: time="2025-09-16T04:53:41.860465520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.979492831s"
Sep 16 04:53:41.860567 containerd[1569]: time="2025-09-16T04:53:41.860531734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 16 04:53:44.280379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:53:44.280549 systemd[1]: kubelet.service: Consumed 117ms CPU time, 109.2M memory peak.
Sep 16 04:53:44.282728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:53:44.313611 systemd[1]: Reload requested from client PID 2289 ('systemctl') (unit session-7.scope)...
Sep 16 04:53:44.313745 systemd[1]: Reloading...
Sep 16 04:53:44.389943 zram_generator::config[2329]: No configuration found.
Sep 16 04:53:44.580263 systemd[1]: Reloading finished in 266 ms.
Sep 16 04:53:44.632945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:53:44.636235 systemd[1]: kubelet.service: Deactivated successfully.
Sep 16 04:53:44.636436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:53:44.636470 systemd[1]: kubelet.service: Consumed 80ms CPU time, 98.5M memory peak.
Sep 16 04:53:44.638806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:53:44.754033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:53:44.762992 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 16 04:53:44.800706 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 04:53:44.800706 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 16 04:53:44.800706 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
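The three deprecation warnings above say the flags should move into the kubelet config file; KubeletConfiguration has equivalent fields for the first and third, while the sandbox image now comes from the CRI runtime's own config. A hedged sketch of that mapping (field names from the upstream kubelet config-file docs; the endpoint value is an assumption, the plugin path matches the Flexvolume path seen later in this log):

```yaml
# KubeletConfiguration equivalents of the deprecated flags (illustrative sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # was --container-runtime-endpoint (assumed socket path)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # was --volume-plugin-dir
# --pod-infra-container-image has no config-file field; the sandbox image is
# configured in the CRI runtime (e.g. containerd) instead.
```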
Sep 16 04:53:44.801394 kubelet[2389]: I0916 04:53:44.800754 2389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 16 04:53:45.086717 kubelet[2389]: I0916 04:53:45.086665 2389 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 16 04:53:45.086717 kubelet[2389]: I0916 04:53:45.086708 2389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 16 04:53:45.087420 kubelet[2389]: I0916 04:53:45.087257 2389 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 16 04:53:45.119378 kubelet[2389]: I0916 04:53:45.119331 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 04:53:45.121120 kubelet[2389]: E0916 04:53:45.121093 2389 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.208.182:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:53:45.127382 kubelet[2389]: I0916 04:53:45.127256 2389 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 16 04:53:45.131902 kubelet[2389]: I0916 04:53:45.131863 2389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 16 04:53:45.135292 kubelet[2389]: I0916 04:53:45.135230 2389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 16 04:53:45.135477 kubelet[2389]: I0916 04:53:45.135277 2389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-n-200d586c0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 16 04:53:45.137222 kubelet[2389]: I0916 04:53:45.137190 2389 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 04:53:45.137222 kubelet[2389]: I0916 04:53:45.137211 2389 container_manager_linux.go:304] "Creating device plugin manager"
Sep 16 04:53:45.138483 kubelet[2389]: I0916 04:53:45.138451 2389 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:53:45.143545 kubelet[2389]: I0916 04:53:45.143321 2389 kubelet.go:446] "Attempting to sync node with API server"
Sep 16 04:53:45.143545 kubelet[2389]: I0916 04:53:45.143366 2389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 04:53:45.144843 kubelet[2389]: I0916 04:53:45.144637 2389 kubelet.go:352] "Adding apiserver pod source"
Sep 16 04:53:45.144843 kubelet[2389]: I0916 04:53:45.144661 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 04:53:45.151048 kubelet[2389]: W0916 04:53:45.150900 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.208.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-200d586c0a&limit=500&resourceVersion=0": dial tcp 37.27.208.182:6443: connect: connection refused
Sep 16 04:53:45.151048 kubelet[2389]: E0916 04:53:45.150971 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.208.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-200d586c0a&limit=500&resourceVersion=0\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:53:45.151390 kubelet[2389]: W0916 04:53:45.151339 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.208.182:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.208.182:6443: connect: connection refused
Sep 16 04:53:45.151434 kubelet[2389]: E0916 04:53:45.151398 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.208.182:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:53:45.151810 kubelet[2389]: I0916 04:53:45.151786 2389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 04:53:45.155952 kubelet[2389]: I0916 04:53:45.155921 2389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 16 04:53:45.156612 kubelet[2389]: W0916 04:53:45.156588 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 16 04:53:45.161769 kubelet[2389]: I0916 04:53:45.161748 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 16 04:53:45.161879 kubelet[2389]: I0916 04:53:45.161870 2389 server.go:1287] "Started kubelet"
Sep 16 04:53:45.173743 kubelet[2389]: I0916 04:53:45.173669 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 04:53:45.174477 kubelet[2389]: I0916 04:53:45.174026 2389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 04:53:45.177621 kubelet[2389]: I0916 04:53:45.177123 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 04:53:45.178547 kubelet[2389]: E0916 04:53:45.176009 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.208.182:6443/api/v1/namespaces/default/events\": dial tcp 37.27.208.182:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-0-0-n-200d586c0a.1865aa40df77cf25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-0-0-n-200d586c0a,UID:ci-4459-0-0-n-200d586c0a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-0-0-n-200d586c0a,},FirstTimestamp:2025-09-16 04:53:45.161846565 +0000 UTC m=+0.395578511,LastTimestamp:2025-09-16 04:53:45.161846565 +0000 UTC m=+0.395578511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-0-0-n-200d586c0a,}"
Sep 16 04:53:45.179932 kubelet[2389]: I0916 04:53:45.179907 2389 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 04:53:45.181307 kubelet[2389]: I0916 04:53:45.181293 2389 server.go:479] "Adding debug handlers to kubelet server"
Sep 16 04:53:45.182243 kubelet[2389]: I0916 04:53:45.182226 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 04:53:45.183949 kubelet[2389]: I0916 04:53:45.183917 2389 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 16 04:53:45.184226 kubelet[2389]: E0916 04:53:45.184187 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found"
Sep 16 04:53:45.186844 kubelet[2389]: I0916 04:53:45.186826 2389 factory.go:221] Registration of the systemd container factory successfully
Sep 16 04:53:45.187108 kubelet[2389]: I0916 04:53:45.187089 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 04:53:45.187607 kubelet[2389]: I0916 04:53:45.187575 2389 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 16 04:53:45.187673 kubelet[2389]: I0916 04:53:45.187638 2389 reconciler.go:26] "Reconciler: start to
sync state" Sep 16 04:53:45.187752 kubelet[2389]: E0916 04:53:45.187726 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.208.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-200d586c0a?timeout=10s\": dial tcp 37.27.208.182:6443: connect: connection refused" interval="200ms" Sep 16 04:53:45.189489 kubelet[2389]: W0916 04:53:45.189100 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.208.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.208.182:6443: connect: connection refused Sep 16 04:53:45.189489 kubelet[2389]: E0916 04:53:45.189142 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.208.182:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:53:45.189775 kubelet[2389]: I0916 04:53:45.189762 2389 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:53:45.194786 kubelet[2389]: E0916 04:53:45.194764 2389 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:53:45.197550 kubelet[2389]: I0916 04:53:45.197520 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:53:45.199048 kubelet[2389]: I0916 04:53:45.199035 2389 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:53:45.199115 kubelet[2389]: I0916 04:53:45.199108 2389 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:53:45.199170 kubelet[2389]: I0916 04:53:45.199163 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:53:45.199216 kubelet[2389]: I0916 04:53:45.199210 2389 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:53:45.199292 kubelet[2389]: E0916 04:53:45.199278 2389 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:53:45.206589 kubelet[2389]: W0916 04:53:45.206555 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.208.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.208.182:6443: connect: connection refused Sep 16 04:53:45.206788 kubelet[2389]: E0916 04:53:45.206738 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.208.182:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:53:45.215763 kubelet[2389]: I0916 04:53:45.215695 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:53:45.216044 kubelet[2389]: I0916 04:53:45.215848 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:53:45.216044 kubelet[2389]: I0916 04:53:45.215867 2389 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:53:45.217315 kubelet[2389]: I0916 04:53:45.217304 2389 policy_none.go:49] "None policy: Start" Sep 16 04:53:45.217516 kubelet[2389]: I0916 04:53:45.217377 2389 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 
04:53:45.217516 kubelet[2389]: I0916 04:53:45.217390 2389 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:53:45.222369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:53:45.236641 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:53:45.240532 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:53:45.259667 kubelet[2389]: I0916 04:53:45.259569 2389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:53:45.259955 kubelet[2389]: I0916 04:53:45.259835 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:53:45.259955 kubelet[2389]: I0916 04:53:45.259855 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:53:45.260761 kubelet[2389]: I0916 04:53:45.260630 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:53:45.262199 kubelet[2389]: E0916 04:53:45.262163 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:53:45.262363 kubelet[2389]: E0916 04:53:45.262319 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:45.315884 systemd[1]: Created slice kubepods-burstable-podffe1107ca8b85de1052f392ffe6a0ff2.slice - libcontainer container kubepods-burstable-podffe1107ca8b85de1052f392ffe6a0ff2.slice. 
Sep 16 04:53:45.332764 kubelet[2389]: E0916 04:53:45.332535 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.337376 systemd[1]: Created slice kubepods-burstable-pod92a6f4bc4b2c5a471d9ff53f8f9e3bb8.slice - libcontainer container kubepods-burstable-pod92a6f4bc4b2c5a471d9ff53f8f9e3bb8.slice. Sep 16 04:53:45.341900 kubelet[2389]: E0916 04:53:45.341848 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.344097 systemd[1]: Created slice kubepods-burstable-pod2e1f130ba3578863d605d922f3d1be86.slice - libcontainer container kubepods-burstable-pod2e1f130ba3578863d605d922f3d1be86.slice. Sep 16 04:53:45.349516 kubelet[2389]: E0916 04:53:45.348237 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.361817 kubelet[2389]: I0916 04:53:45.361781 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.362237 kubelet[2389]: E0916 04:53:45.362164 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.208.182:6443/api/v1/nodes\": dial tcp 37.27.208.182:6443: connect: connection refused" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388454 kubelet[2389]: I0916 04:53:45.388212 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 
04:53:45.388454 kubelet[2389]: I0916 04:53:45.388263 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388454 kubelet[2389]: E0916 04:53:45.388285 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.208.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-200d586c0a?timeout=10s\": dial tcp 37.27.208.182:6443: connect: connection refused" interval="400ms" Sep 16 04:53:45.388454 kubelet[2389]: I0916 04:53:45.388365 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388454 kubelet[2389]: I0916 04:53:45.388400 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388830 kubelet[2389]: I0916 04:53:45.388454 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffe1107ca8b85de1052f392ffe6a0ff2-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" (UID: 
\"ffe1107ca8b85de1052f392ffe6a0ff2\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388830 kubelet[2389]: I0916 04:53:45.388484 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffe1107ca8b85de1052f392ffe6a0ff2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" (UID: \"ffe1107ca8b85de1052f392ffe6a0ff2\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388830 kubelet[2389]: I0916 04:53:45.388575 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92a6f4bc4b2c5a471d9ff53f8f9e3bb8-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-n-200d586c0a\" (UID: \"92a6f4bc4b2c5a471d9ff53f8f9e3bb8\") " pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388830 kubelet[2389]: I0916 04:53:45.388600 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffe1107ca8b85de1052f392ffe6a0ff2-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" (UID: \"ffe1107ca8b85de1052f392ffe6a0ff2\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.388830 kubelet[2389]: I0916 04:53:45.388618 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.564537 kubelet[2389]: I0916 04:53:45.564486 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.564835 kubelet[2389]: E0916 
04:53:45.564805 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.208.182:6443/api/v1/nodes\": dial tcp 37.27.208.182:6443: connect: connection refused" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.634749 containerd[1569]: time="2025-09-16T04:53:45.634707298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-n-200d586c0a,Uid:ffe1107ca8b85de1052f392ffe6a0ff2,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:45.643442 containerd[1569]: time="2025-09-16T04:53:45.643381553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-n-200d586c0a,Uid:92a6f4bc4b2c5a471d9ff53f8f9e3bb8,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:45.653829 containerd[1569]: time="2025-09-16T04:53:45.653762538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-n-200d586c0a,Uid:2e1f130ba3578863d605d922f3d1be86,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:45.755964 containerd[1569]: time="2025-09-16T04:53:45.755917640Z" level=info msg="connecting to shim f7e0ebfd8777c74337991fade3c4b4df5425b67eb52ce336092d8fd96ca6bf07" address="unix:///run/containerd/s/dfc73e3ded834a5291891ea9e8cb476a1b15a4900ad9bded94cbac6b7ae8b53a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:45.757518 containerd[1569]: time="2025-09-16T04:53:45.757473707Z" level=info msg="connecting to shim bab4c8ac4168b685f7001020b46e48c1738b83b0cf0b00e3c9cf965cb0b14f27" address="unix:///run/containerd/s/1bd0ea96905c8a09c2fb5c33a4c894ab22f0974199423cbf30f74d6ad3de7666" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:45.759062 containerd[1569]: time="2025-09-16T04:53:45.759028593Z" level=info msg="connecting to shim 0f06452a90396f4ba44e1d2eb4dabd9579113e397ec51dcba5de719f58206010" address="unix:///run/containerd/s/15b6e9caef074168719e7719beb7352336bf17dd3e947df9d949cf5acfc6950f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:45.789719 kubelet[2389]: 
E0916 04:53:45.789000 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.208.182:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-0-0-n-200d586c0a?timeout=10s\": dial tcp 37.27.208.182:6443: connect: connection refused" interval="800ms" Sep 16 04:53:45.841669 systemd[1]: Started cri-containerd-0f06452a90396f4ba44e1d2eb4dabd9579113e397ec51dcba5de719f58206010.scope - libcontainer container 0f06452a90396f4ba44e1d2eb4dabd9579113e397ec51dcba5de719f58206010. Sep 16 04:53:45.842639 systemd[1]: Started cri-containerd-bab4c8ac4168b685f7001020b46e48c1738b83b0cf0b00e3c9cf965cb0b14f27.scope - libcontainer container bab4c8ac4168b685f7001020b46e48c1738b83b0cf0b00e3c9cf965cb0b14f27. Sep 16 04:53:45.843562 systemd[1]: Started cri-containerd-f7e0ebfd8777c74337991fade3c4b4df5425b67eb52ce336092d8fd96ca6bf07.scope - libcontainer container f7e0ebfd8777c74337991fade3c4b4df5425b67eb52ce336092d8fd96ca6bf07. Sep 16 04:53:45.915171 containerd[1569]: time="2025-09-16T04:53:45.914802815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-0-0-n-200d586c0a,Uid:2e1f130ba3578863d605d922f3d1be86,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f06452a90396f4ba44e1d2eb4dabd9579113e397ec51dcba5de719f58206010\"" Sep 16 04:53:45.920662 containerd[1569]: time="2025-09-16T04:53:45.920571152Z" level=info msg="CreateContainer within sandbox \"0f06452a90396f4ba44e1d2eb4dabd9579113e397ec51dcba5de719f58206010\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:53:45.932215 containerd[1569]: time="2025-09-16T04:53:45.932031441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-0-0-n-200d586c0a,Uid:ffe1107ca8b85de1052f392ffe6a0ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab4c8ac4168b685f7001020b46e48c1738b83b0cf0b00e3c9cf965cb0b14f27\"" Sep 16 04:53:45.936453 containerd[1569]: time="2025-09-16T04:53:45.936416984Z" 
level=info msg="CreateContainer within sandbox \"bab4c8ac4168b685f7001020b46e48c1738b83b0cf0b00e3c9cf965cb0b14f27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:53:45.940187 containerd[1569]: time="2025-09-16T04:53:45.939890087Z" level=info msg="Container a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:45.946116 containerd[1569]: time="2025-09-16T04:53:45.946081817Z" level=info msg="Container 187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:45.956591 containerd[1569]: time="2025-09-16T04:53:45.956553151Z" level=info msg="CreateContainer within sandbox \"0f06452a90396f4ba44e1d2eb4dabd9579113e397ec51dcba5de719f58206010\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c\"" Sep 16 04:53:45.957436 containerd[1569]: time="2025-09-16T04:53:45.957399338Z" level=info msg="CreateContainer within sandbox \"bab4c8ac4168b685f7001020b46e48c1738b83b0cf0b00e3c9cf965cb0b14f27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e\"" Sep 16 04:53:45.957486 containerd[1569]: time="2025-09-16T04:53:45.957460262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-0-0-n-200d586c0a,Uid:92a6f4bc4b2c5a471d9ff53f8f9e3bb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7e0ebfd8777c74337991fade3c4b4df5425b67eb52ce336092d8fd96ca6bf07\"" Sep 16 04:53:45.960891 containerd[1569]: time="2025-09-16T04:53:45.960672115Z" level=info msg="StartContainer for \"a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c\"" Sep 16 04:53:45.961633 containerd[1569]: time="2025-09-16T04:53:45.961601908Z" level=info msg="connecting to shim 
a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c" address="unix:///run/containerd/s/15b6e9caef074168719e7719beb7352336bf17dd3e947df9d949cf5acfc6950f" protocol=ttrpc version=3 Sep 16 04:53:45.962135 containerd[1569]: time="2025-09-16T04:53:45.962092639Z" level=info msg="StartContainer for \"187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e\"" Sep 16 04:53:45.963017 containerd[1569]: time="2025-09-16T04:53:45.962924088Z" level=info msg="connecting to shim 187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e" address="unix:///run/containerd/s/1bd0ea96905c8a09c2fb5c33a4c894ab22f0974199423cbf30f74d6ad3de7666" protocol=ttrpc version=3 Sep 16 04:53:45.965728 containerd[1569]: time="2025-09-16T04:53:45.965701736Z" level=info msg="CreateContainer within sandbox \"f7e0ebfd8777c74337991fade3c4b4df5425b67eb52ce336092d8fd96ca6bf07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:53:45.969014 kubelet[2389]: I0916 04:53:45.968967 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.969442 kubelet[2389]: E0916 04:53:45.969303 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.208.182:6443/api/v1/nodes\": dial tcp 37.27.208.182:6443: connect: connection refused" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:45.973833 kubelet[2389]: W0916 04:53:45.973783 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.208.182:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.208.182:6443: connect: connection refused Sep 16 04:53:45.973888 kubelet[2389]: E0916 04:53:45.973856 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://37.27.208.182:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:53:45.975412 containerd[1569]: time="2025-09-16T04:53:45.975375966Z" level=info msg="Container d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:45.983744 systemd[1]: Started cri-containerd-a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c.scope - libcontainer container a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c. Sep 16 04:53:45.987442 containerd[1569]: time="2025-09-16T04:53:45.987402497Z" level=info msg="CreateContainer within sandbox \"f7e0ebfd8777c74337991fade3c4b4df5425b67eb52ce336092d8fd96ca6bf07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e\"" Sep 16 04:53:45.988213 containerd[1569]: time="2025-09-16T04:53:45.988174775Z" level=info msg="StartContainer for \"d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e\"" Sep 16 04:53:45.991417 containerd[1569]: time="2025-09-16T04:53:45.991330252Z" level=info msg="connecting to shim d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e" address="unix:///run/containerd/s/dfc73e3ded834a5291891ea9e8cb476a1b15a4900ad9bded94cbac6b7ae8b53a" protocol=ttrpc version=3 Sep 16 04:53:45.993110 systemd[1]: Started cri-containerd-187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e.scope - libcontainer container 187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e. Sep 16 04:53:46.016704 systemd[1]: Started cri-containerd-d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e.scope - libcontainer container d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e. 
Sep 16 04:53:46.051393 containerd[1569]: time="2025-09-16T04:53:46.051341439Z" level=info msg="StartContainer for \"a89ea48de28ab3046c314203d6c54789fddd4afbf682aabca86ba0ed1477210c\" returns successfully" Sep 16 04:53:46.078219 containerd[1569]: time="2025-09-16T04:53:46.077986562Z" level=info msg="StartContainer for \"187f60ef6129769a58490713c9ffd0fe426151756364db82a33c2adecad3523e\" returns successfully" Sep 16 04:53:46.106433 containerd[1569]: time="2025-09-16T04:53:46.106328365Z" level=info msg="StartContainer for \"d81a57ebd46e06b76589b5a44a3e1be32b8b61bac8f605cdac4eda72ba3eba3e\" returns successfully" Sep 16 04:53:46.219912 kubelet[2389]: E0916 04:53:46.219834 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:46.222712 kubelet[2389]: E0916 04:53:46.222694 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:46.223699 kubelet[2389]: E0916 04:53:46.223411 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:46.240186 kubelet[2389]: W0916 04:53:46.240091 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.208.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-200d586c0a&limit=500&resourceVersion=0": dial tcp 37.27.208.182:6443: connect: connection refused Sep 16 04:53:46.240186 kubelet[2389]: E0916 04:53:46.240163 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://37.27.208.182:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-0-0-n-200d586c0a&limit=500&resourceVersion=0\": dial tcp 37.27.208.182:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:53:46.772586 kubelet[2389]: I0916 04:53:46.772239 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:47.227733 kubelet[2389]: E0916 04:53:47.226877 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:47.228979 kubelet[2389]: E0916 04:53:47.228848 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:47.506721 kubelet[2389]: E0916 04:53:47.506608 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-0-0-n-200d586c0a\" not found" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:47.545896 kubelet[2389]: I0916 04:53:47.545732 2389 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:47.545896 kubelet[2389]: E0916 04:53:47.545767 2389 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-0-0-n-200d586c0a\": node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:47.559087 kubelet[2389]: E0916 04:53:47.559054 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:47.659778 kubelet[2389]: E0916 04:53:47.659698 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:47.760800 kubelet[2389]: E0916 04:53:47.760684 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:47.861645 kubelet[2389]: E0916 04:53:47.861590 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:47.962561 kubelet[2389]: E0916 04:53:47.962484 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:48.063773 kubelet[2389]: E0916 04:53:48.063656 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:48.164141 kubelet[2389]: E0916 04:53:48.164087 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:48.265079 kubelet[2389]: E0916 04:53:48.265027 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:48.366027 kubelet[2389]: E0916 04:53:48.365984 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:48.466277 kubelet[2389]: E0916 04:53:48.466231 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-0-0-n-200d586c0a\" not found" Sep 16 04:53:48.585168 kubelet[2389]: I0916 04:53:48.585115 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:48.599540 kubelet[2389]: I0916 04:53:48.599482 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:48.606457 kubelet[2389]: I0916 04:53:48.606405 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:49.153612 kubelet[2389]: I0916 04:53:49.153487 2389 apiserver.go:52] "Watching 
apiserver" Sep 16 04:53:49.188038 kubelet[2389]: I0916 04:53:49.187986 2389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:53:49.719519 systemd[1]: Reload requested from client PID 2654 ('systemctl') (unit session-7.scope)... Sep 16 04:53:49.719882 systemd[1]: Reloading... Sep 16 04:53:49.828591 zram_generator::config[2697]: No configuration found. Sep 16 04:53:50.029140 systemd[1]: Reloading finished in 308 ms. Sep 16 04:53:50.056914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:50.060024 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:53:50.060476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:53:50.060647 systemd[1]: kubelet.service: Consumed 719ms CPU time, 128.6M memory peak. Sep 16 04:53:50.063404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:53:50.183057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:53:50.193058 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:53:50.247765 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:53:50.247765 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:53:50.247765 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:53:50.248115 kubelet[2749]: I0916 04:53:50.247843 2749 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:53:50.256102 kubelet[2749]: I0916 04:53:50.256069 2749 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:53:50.256102 kubelet[2749]: I0916 04:53:50.256092 2749 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:53:50.256375 kubelet[2749]: I0916 04:53:50.256347 2749 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:53:50.257850 kubelet[2749]: I0916 04:53:50.257824 2749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:53:50.263302 kubelet[2749]: I0916 04:53:50.263087 2749 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:53:50.267386 kubelet[2749]: I0916 04:53:50.267359 2749 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:53:50.272306 kubelet[2749]: I0916 04:53:50.271592 2749 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:53:50.272306 kubelet[2749]: I0916 04:53:50.271889 2749 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:53:50.272306 kubelet[2749]: I0916 04:53:50.271917 2749 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-0-0-n-200d586c0a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:53:50.272306 kubelet[2749]: I0916 04:53:50.272140 2749 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 16 04:53:50.272556 kubelet[2749]: I0916 04:53:50.272151 2749 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:53:50.272610 kubelet[2749]: I0916 04:53:50.272597 2749 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:53:50.272808 kubelet[2749]: I0916 04:53:50.272794 2749 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:53:50.272889 kubelet[2749]: I0916 04:53:50.272878 2749 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:53:50.272955 kubelet[2749]: I0916 04:53:50.272946 2749 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:53:50.273010 kubelet[2749]: I0916 04:53:50.273003 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:53:50.275840 kubelet[2749]: I0916 04:53:50.275827 2749 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:53:50.276206 kubelet[2749]: I0916 04:53:50.276191 2749 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:53:50.278710 kubelet[2749]: I0916 04:53:50.278698 2749 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:53:50.278807 kubelet[2749]: I0916 04:53:50.278799 2749 server.go:1287] "Started kubelet" Sep 16 04:53:50.279931 kubelet[2749]: I0916 04:53:50.279859 2749 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:53:50.280952 kubelet[2749]: I0916 04:53:50.280908 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:53:50.281131 kubelet[2749]: I0916 04:53:50.281112 2749 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:53:50.282140 kubelet[2749]: I0916 04:53:50.282119 2749 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:53:50.287725 kubelet[2749]: I0916 
04:53:50.287029 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:53:50.291318 kubelet[2749]: E0916 04:53:50.290538 2749 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:53:50.291770 kubelet[2749]: I0916 04:53:50.291744 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:53:50.292246 kubelet[2749]: I0916 04:53:50.292231 2749 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:53:50.294863 kubelet[2749]: I0916 04:53:50.294841 2749 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:53:50.294948 kubelet[2749]: I0916 04:53:50.294924 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:53:50.297912 kubelet[2749]: I0916 04:53:50.297888 2749 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:53:50.298372 kubelet[2749]: I0916 04:53:50.298358 2749 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:53:50.298586 kubelet[2749]: I0916 04:53:50.298571 2749 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:53:50.301284 kubelet[2749]: I0916 04:53:50.301257 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:53:50.302350 kubelet[2749]: I0916 04:53:50.302337 2749 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:53:50.302418 kubelet[2749]: I0916 04:53:50.302411 2749 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:53:50.302472 kubelet[2749]: I0916 04:53:50.302464 2749 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:53:50.302548 kubelet[2749]: I0916 04:53:50.302540 2749 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:53:50.302628 kubelet[2749]: E0916 04:53:50.302612 2749 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:53:50.342253 kubelet[2749]: I0916 04:53:50.342232 2749 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:53:50.342433 kubelet[2749]: I0916 04:53:50.342415 2749 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:53:50.342543 kubelet[2749]: I0916 04:53:50.342534 2749 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:53:50.342724 kubelet[2749]: I0916 04:53:50.342712 2749 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:53:50.342781 kubelet[2749]: I0916 04:53:50.342762 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:53:50.342830 kubelet[2749]: I0916 04:53:50.342824 2749 policy_none.go:49] "None policy: Start" Sep 16 04:53:50.342872 kubelet[2749]: I0916 04:53:50.342866 2749 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:53:50.342911 kubelet[2749]: I0916 04:53:50.342905 2749 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:53:50.343027 kubelet[2749]: I0916 04:53:50.343019 2749 state_mem.go:75] "Updated machine memory state" Sep 16 04:53:50.349930 kubelet[2749]: I0916 04:53:50.349908 2749 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:53:50.350328 kubelet[2749]: I0916 
04:53:50.350232 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:53:50.350328 kubelet[2749]: I0916 04:53:50.350244 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:53:50.350840 kubelet[2749]: I0916 04:53:50.350543 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:53:50.352801 kubelet[2749]: E0916 04:53:50.352750 2749 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:53:50.403266 kubelet[2749]: I0916 04:53:50.403229 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.403853 kubelet[2749]: I0916 04:53:50.403552 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.403853 kubelet[2749]: I0916 04:53:50.403126 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.409088 kubelet[2749]: E0916 04:53:50.409049 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" already exists" pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.409775 kubelet[2749]: E0916 04:53:50.409754 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-n-200d586c0a\" already exists" pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.409847 kubelet[2749]: E0916 04:53:50.409800 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" already exists" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.458708 kubelet[2749]: I0916 04:53:50.458658 2749 kubelet_node_status.go:75] "Attempting to 
register node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.467054 kubelet[2749]: I0916 04:53:50.467025 2749 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.467150 kubelet[2749]: I0916 04:53:50.467089 2749 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.500783 kubelet[2749]: I0916 04:53:50.500734 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffe1107ca8b85de1052f392ffe6a0ff2-k8s-certs\") pod \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" (UID: \"ffe1107ca8b85de1052f392ffe6a0ff2\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.500783 kubelet[2749]: I0916 04:53:50.500803 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501112 kubelet[2749]: I0916 04:53:50.500834 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-k8s-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501112 kubelet[2749]: I0916 04:53:50.500857 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-kubeconfig\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: 
\"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501112 kubelet[2749]: I0916 04:53:50.500880 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501112 kubelet[2749]: I0916 04:53:50.500898 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92a6f4bc4b2c5a471d9ff53f8f9e3bb8-kubeconfig\") pod \"kube-scheduler-ci-4459-0-0-n-200d586c0a\" (UID: \"92a6f4bc4b2c5a471d9ff53f8f9e3bb8\") " pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501112 kubelet[2749]: I0916 04:53:50.500919 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffe1107ca8b85de1052f392ffe6a0ff2-ca-certs\") pod \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" (UID: \"ffe1107ca8b85de1052f392ffe6a0ff2\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501245 kubelet[2749]: I0916 04:53:50.500937 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffe1107ca8b85de1052f392ffe6a0ff2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" (UID: \"ffe1107ca8b85de1052f392ffe6a0ff2\") " pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:50.501368 kubelet[2749]: I0916 04:53:50.501328 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e1f130ba3578863d605d922f3d1be86-ca-certs\") pod \"kube-controller-manager-ci-4459-0-0-n-200d586c0a\" (UID: \"2e1f130ba3578863d605d922f3d1be86\") " pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:51.274259 kubelet[2749]: I0916 04:53:51.274193 2749 apiserver.go:52] "Watching apiserver" Sep 16 04:53:51.299313 kubelet[2749]: I0916 04:53:51.298606 2749 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:53:51.329151 kubelet[2749]: I0916 04:53:51.329124 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:51.329954 kubelet[2749]: I0916 04:53:51.329940 2749 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:51.338697 kubelet[2749]: E0916 04:53:51.338640 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-0-0-n-200d586c0a\" already exists" pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:51.339266 kubelet[2749]: E0916 04:53:51.339013 2749 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-0-0-n-200d586c0a\" already exists" pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" Sep 16 04:53:51.364818 kubelet[2749]: I0916 04:53:51.364732 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-0-0-n-200d586c0a" podStartSLOduration=3.364716128 podStartE2EDuration="3.364716128s" podCreationTimestamp="2025-09-16 04:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:51.356581735 +0000 UTC m=+1.157632951" watchObservedRunningTime="2025-09-16 04:53:51.364716128 +0000 UTC m=+1.165767334" Sep 16 04:53:51.365004 kubelet[2749]: I0916 
04:53:51.364838 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-0-0-n-200d586c0a" podStartSLOduration=3.3648341090000002 podStartE2EDuration="3.364834109s" podCreationTimestamp="2025-09-16 04:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:51.363584145 +0000 UTC m=+1.164635351" watchObservedRunningTime="2025-09-16 04:53:51.364834109 +0000 UTC m=+1.165885315" Sep 16 04:53:56.157763 kubelet[2749]: I0916 04:53:56.157736 2749 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:53:56.158488 containerd[1569]: time="2025-09-16T04:53:56.158020802Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:53:56.159091 kubelet[2749]: I0916 04:53:56.158788 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:53:56.794470 kubelet[2749]: I0916 04:53:56.794409 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-0-0-n-200d586c0a" podStartSLOduration=8.794390881 podStartE2EDuration="8.794390881s" podCreationTimestamp="2025-09-16 04:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:51.373525356 +0000 UTC m=+1.174576562" watchObservedRunningTime="2025-09-16 04:53:56.794390881 +0000 UTC m=+6.595442087" Sep 16 04:53:56.805379 systemd[1]: Created slice kubepods-besteffort-pod870cb8c1_dd8e_4717_b2ec_f477c2ef36fe.slice - libcontainer container kubepods-besteffort-pod870cb8c1_dd8e_4717_b2ec_f477c2ef36fe.slice. 
Sep 16 04:53:56.844514 kubelet[2749]: I0916 04:53:56.844369 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/870cb8c1-dd8e-4717-b2ec-f477c2ef36fe-kube-proxy\") pod \"kube-proxy-hrw4p\" (UID: \"870cb8c1-dd8e-4717-b2ec-f477c2ef36fe\") " pod="kube-system/kube-proxy-hrw4p" Sep 16 04:53:56.844514 kubelet[2749]: I0916 04:53:56.844421 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870cb8c1-dd8e-4717-b2ec-f477c2ef36fe-xtables-lock\") pod \"kube-proxy-hrw4p\" (UID: \"870cb8c1-dd8e-4717-b2ec-f477c2ef36fe\") " pod="kube-system/kube-proxy-hrw4p" Sep 16 04:53:56.844514 kubelet[2749]: I0916 04:53:56.844444 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870cb8c1-dd8e-4717-b2ec-f477c2ef36fe-lib-modules\") pod \"kube-proxy-hrw4p\" (UID: \"870cb8c1-dd8e-4717-b2ec-f477c2ef36fe\") " pod="kube-system/kube-proxy-hrw4p" Sep 16 04:53:56.844514 kubelet[2749]: I0916 04:53:56.844464 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwhhr\" (UniqueName: \"kubernetes.io/projected/870cb8c1-dd8e-4717-b2ec-f477c2ef36fe-kube-api-access-bwhhr\") pod \"kube-proxy-hrw4p\" (UID: \"870cb8c1-dd8e-4717-b2ec-f477c2ef36fe\") " pod="kube-system/kube-proxy-hrw4p" Sep 16 04:53:57.115526 containerd[1569]: time="2025-09-16T04:53:57.115437085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrw4p,Uid:870cb8c1-dd8e-4717-b2ec-f477c2ef36fe,Namespace:kube-system,Attempt:0,}" Sep 16 04:53:57.131333 containerd[1569]: time="2025-09-16T04:53:57.131295552Z" level=info msg="connecting to shim 826a96b6957bf31fce49ba7b28fd02508101b374989bf81902e9d95d5b961cc8" 
address="unix:///run/containerd/s/0ad0a77220e2d50a78c240f0931edc8a376ae6784fb300681ab845e90050120b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:57.154610 systemd[1]: Started cri-containerd-826a96b6957bf31fce49ba7b28fd02508101b374989bf81902e9d95d5b961cc8.scope - libcontainer container 826a96b6957bf31fce49ba7b28fd02508101b374989bf81902e9d95d5b961cc8. Sep 16 04:53:57.178520 containerd[1569]: time="2025-09-16T04:53:57.178212005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrw4p,Uid:870cb8c1-dd8e-4717-b2ec-f477c2ef36fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"826a96b6957bf31fce49ba7b28fd02508101b374989bf81902e9d95d5b961cc8\"" Sep 16 04:53:57.183017 containerd[1569]: time="2025-09-16T04:53:57.182926855Z" level=info msg="CreateContainer within sandbox \"826a96b6957bf31fce49ba7b28fd02508101b374989bf81902e9d95d5b961cc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:53:57.207279 containerd[1569]: time="2025-09-16T04:53:57.203655053Z" level=info msg="Container 1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:57.206065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259894254.mount: Deactivated successfully. 
Sep 16 04:53:57.215919 containerd[1569]: time="2025-09-16T04:53:57.215523437Z" level=info msg="CreateContainer within sandbox \"826a96b6957bf31fce49ba7b28fd02508101b374989bf81902e9d95d5b961cc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c\"" Sep 16 04:53:57.217305 containerd[1569]: time="2025-09-16T04:53:57.217286273Z" level=info msg="StartContainer for \"1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c\"" Sep 16 04:53:57.219230 containerd[1569]: time="2025-09-16T04:53:57.218962876Z" level=info msg="connecting to shim 1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c" address="unix:///run/containerd/s/0ad0a77220e2d50a78c240f0931edc8a376ae6784fb300681ab845e90050120b" protocol=ttrpc version=3 Sep 16 04:53:57.246520 kubelet[2749]: I0916 04:53:57.246475 2749 status_manager.go:890] "Failed to get status for pod" podUID="c0daae30-e8b8-4bd3-bd39-e7876e0b22f1" pod="tigera-operator/tigera-operator-755d956888-hvb56" err="pods \"tigera-operator-755d956888-hvb56\" is forbidden: User \"system:node:ci-4459-0-0-n-200d586c0a\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4459-0-0-n-200d586c0a' and this object" Sep 16 04:53:57.246909 kubelet[2749]: W0916 04:53:57.246896 2749 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4459-0-0-n-200d586c0a" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4459-0-0-n-200d586c0a' and this object Sep 16 04:53:57.246997 kubelet[2749]: E0916 04:53:57.246978 2749 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4459-0-0-n-200d586c0a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4459-0-0-n-200d586c0a' and this object" logger="UnhandledError" Sep 16 04:53:57.247770 systemd[1]: Started cri-containerd-1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c.scope - libcontainer container 1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c. Sep 16 04:53:57.249717 kubelet[2749]: I0916 04:53:57.248673 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c0daae30-e8b8-4bd3-bd39-e7876e0b22f1-var-lib-calico\") pod \"tigera-operator-755d956888-hvb56\" (UID: \"c0daae30-e8b8-4bd3-bd39-e7876e0b22f1\") " pod="tigera-operator/tigera-operator-755d956888-hvb56" Sep 16 04:53:57.250101 kubelet[2749]: I0916 04:53:57.249886 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlklp\" (UniqueName: \"kubernetes.io/projected/c0daae30-e8b8-4bd3-bd39-e7876e0b22f1-kube-api-access-rlklp\") pod \"tigera-operator-755d956888-hvb56\" (UID: \"c0daae30-e8b8-4bd3-bd39-e7876e0b22f1\") " pod="tigera-operator/tigera-operator-755d956888-hvb56" Sep 16 04:53:57.252783 systemd[1]: Created slice kubepods-besteffort-podc0daae30_e8b8_4bd3_bd39_e7876e0b22f1.slice - libcontainer container kubepods-besteffort-podc0daae30_e8b8_4bd3_bd39_e7876e0b22f1.slice. 
Sep 16 04:53:57.282130 containerd[1569]: time="2025-09-16T04:53:57.282103582Z" level=info msg="StartContainer for \"1d7622c8bda806e16927237de92a83185fb12f7e01982e61e5c2e2cd912e254c\" returns successfully" Sep 16 04:53:57.556192 containerd[1569]: time="2025-09-16T04:53:57.555733656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-hvb56,Uid:c0daae30-e8b8-4bd3-bd39-e7876e0b22f1,Namespace:tigera-operator,Attempt:0,}" Sep 16 04:53:57.571282 containerd[1569]: time="2025-09-16T04:53:57.571151746Z" level=info msg="connecting to shim 26aaf037d7ecd27f1a588a3da6ffcaddedb8932867721cee98c76f2a69f21d68" address="unix:///run/containerd/s/b2468b7381b3493436fcad8eb0bf0c8cd6472fb9d784c4958e2b746fbb535dc3" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:53:57.597661 systemd[1]: Started cri-containerd-26aaf037d7ecd27f1a588a3da6ffcaddedb8932867721cee98c76f2a69f21d68.scope - libcontainer container 26aaf037d7ecd27f1a588a3da6ffcaddedb8932867721cee98c76f2a69f21d68. Sep 16 04:53:57.652192 containerd[1569]: time="2025-09-16T04:53:57.652151589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-hvb56,Uid:c0daae30-e8b8-4bd3-bd39-e7876e0b22f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"26aaf037d7ecd27f1a588a3da6ffcaddedb8932867721cee98c76f2a69f21d68\"" Sep 16 04:53:57.654944 containerd[1569]: time="2025-09-16T04:53:57.654928035Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 16 04:53:59.421871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123662025.mount: Deactivated successfully. 
Sep 16 04:53:59.509759 kubelet[2749]: I0916 04:53:59.509706 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrw4p" podStartSLOduration=3.509301535 podStartE2EDuration="3.509301535s" podCreationTimestamp="2025-09-16 04:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:53:57.363097303 +0000 UTC m=+7.164148509" watchObservedRunningTime="2025-09-16 04:53:59.509301535 +0000 UTC m=+9.310352741" Sep 16 04:53:59.803455 containerd[1569]: time="2025-09-16T04:53:59.803244748Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:59.804292 containerd[1569]: time="2025-09-16T04:53:59.804247409Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 16 04:53:59.805455 containerd[1569]: time="2025-09-16T04:53:59.805411250Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:59.807572 containerd[1569]: time="2025-09-16T04:53:59.807489457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:53:59.808311 containerd[1569]: time="2025-09-16T04:53:59.807887283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.152934962s" Sep 16 04:53:59.808311 containerd[1569]: time="2025-09-16T04:53:59.807914835Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 16 04:53:59.810709 containerd[1569]: time="2025-09-16T04:53:59.810683176Z" level=info msg="CreateContainer within sandbox \"26aaf037d7ecd27f1a588a3da6ffcaddedb8932867721cee98c76f2a69f21d68\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 16 04:53:59.816840 containerd[1569]: time="2025-09-16T04:53:59.816816467Z" level=info msg="Container ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:53:59.826229 containerd[1569]: time="2025-09-16T04:53:59.826179664Z" level=info msg="CreateContainer within sandbox \"26aaf037d7ecd27f1a588a3da6ffcaddedb8932867721cee98c76f2a69f21d68\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273\"" Sep 16 04:53:59.826968 containerd[1569]: time="2025-09-16T04:53:59.826945800Z" level=info msg="StartContainer for \"ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273\"" Sep 16 04:53:59.827977 containerd[1569]: time="2025-09-16T04:53:59.827949572Z" level=info msg="connecting to shim ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273" address="unix:///run/containerd/s/b2468b7381b3493436fcad8eb0bf0c8cd6472fb9d784c4958e2b746fbb535dc3" protocol=ttrpc version=3 Sep 16 04:53:59.846729 systemd[1]: Started cri-containerd-ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273.scope - libcontainer container ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273. 
Sep 16 04:53:59.872295 containerd[1569]: time="2025-09-16T04:53:59.872253797Z" level=info msg="StartContainer for \"ab44f071edfb3107b9b827d9b1c0a3c0e2a957e108a7ba208de4bfe6bd260273\" returns successfully" Sep 16 04:54:00.401738 kubelet[2749]: I0916 04:54:00.400032 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-hvb56" podStartSLOduration=1.245683312 podStartE2EDuration="3.400010424s" podCreationTimestamp="2025-09-16 04:53:57 +0000 UTC" firstStartedPulling="2025-09-16 04:53:57.654426084 +0000 UTC m=+7.455477291" lastFinishedPulling="2025-09-16 04:53:59.808753197 +0000 UTC m=+9.609804403" observedRunningTime="2025-09-16 04:54:00.377133188 +0000 UTC m=+10.178184393" watchObservedRunningTime="2025-09-16 04:54:00.400010424 +0000 UTC m=+10.201061640" Sep 16 04:54:01.150050 update_engine[1537]: I20250916 04:54:01.149533 1537 update_attempter.cc:509] Updating boot flags... Sep 16 04:54:06.037664 sudo[1834]: pam_unix(sudo:session): session closed for user root Sep 16 04:54:06.213351 sshd[1833]: Connection closed by 139.178.89.65 port 41622 Sep 16 04:54:06.215707 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Sep 16 04:54:06.220337 systemd[1]: sshd@6-37.27.208.182:22-139.178.89.65:41622.service: Deactivated successfully. Sep 16 04:54:06.228017 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 04:54:06.228179 systemd[1]: session-7.scope: Consumed 4.034s CPU time, 162M memory peak. Sep 16 04:54:06.232302 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:54:06.233664 systemd-logind[1535]: Removed session 7. Sep 16 04:54:09.151122 systemd[1]: Created slice kubepods-besteffort-pod943fd266_dda9_4571_a959_43cf5de76761.slice - libcontainer container kubepods-besteffort-pod943fd266_dda9_4571_a959_43cf5de76761.slice. 
Sep 16 04:54:09.233527 kubelet[2749]: I0916 04:54:09.233462 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/943fd266-dda9-4571-a959-43cf5de76761-tigera-ca-bundle\") pod \"calico-typha-fb68c8d94-gjts2\" (UID: \"943fd266-dda9-4571-a959-43cf5de76761\") " pod="calico-system/calico-typha-fb68c8d94-gjts2" Sep 16 04:54:09.233873 kubelet[2749]: I0916 04:54:09.233543 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/943fd266-dda9-4571-a959-43cf5de76761-typha-certs\") pod \"calico-typha-fb68c8d94-gjts2\" (UID: \"943fd266-dda9-4571-a959-43cf5de76761\") " pod="calico-system/calico-typha-fb68c8d94-gjts2" Sep 16 04:54:09.233873 kubelet[2749]: I0916 04:54:09.233573 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t5s\" (UniqueName: \"kubernetes.io/projected/943fd266-dda9-4571-a959-43cf5de76761-kube-api-access-h8t5s\") pod \"calico-typha-fb68c8d94-gjts2\" (UID: \"943fd266-dda9-4571-a959-43cf5de76761\") " pod="calico-system/calico-typha-fb68c8d94-gjts2" Sep 16 04:54:09.456857 containerd[1569]: time="2025-09-16T04:54:09.456697916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb68c8d94-gjts2,Uid:943fd266-dda9-4571-a959-43cf5de76761,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:09.494427 containerd[1569]: time="2025-09-16T04:54:09.494373307Z" level=info msg="connecting to shim 9ac4514089252242b32e4ef35fbaa04539d08142251867ed98014883ecb4e388" address="unix:///run/containerd/s/dd4f0d20f26e08e5e98e62cbdade98c36b823ab89f723568624504df0918b681" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:09.519441 systemd[1]: Created slice kubepods-besteffort-pod22855b96_b829_46ff_98eb_46952b82f7ce.slice - libcontainer container 
kubepods-besteffort-pod22855b96_b829_46ff_98eb_46952b82f7ce.slice. Sep 16 04:54:09.534699 systemd[1]: Started cri-containerd-9ac4514089252242b32e4ef35fbaa04539d08142251867ed98014883ecb4e388.scope - libcontainer container 9ac4514089252242b32e4ef35fbaa04539d08142251867ed98014883ecb4e388. Sep 16 04:54:09.536662 kubelet[2749]: I0916 04:54:09.536586 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-cni-net-dir\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.536755 kubelet[2749]: I0916 04:54:09.536681 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-policysync\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.536780 kubelet[2749]: I0916 04:54:09.536750 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/22855b96-b829-46ff-98eb-46952b82f7ce-node-certs\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537151 kubelet[2749]: I0916 04:54:09.536811 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-var-lib-calico\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537151 kubelet[2749]: I0916 04:54:09.536848 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-var-run-calico\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537151 kubelet[2749]: I0916 04:54:09.536864 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7pn\" (UniqueName: \"kubernetes.io/projected/22855b96-b829-46ff-98eb-46952b82f7ce-kube-api-access-fl7pn\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537151 kubelet[2749]: I0916 04:54:09.536981 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-cni-log-dir\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537151 kubelet[2749]: I0916 04:54:09.537012 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-flexvol-driver-host\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537297 kubelet[2749]: I0916 04:54:09.537029 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22855b96-b829-46ff-98eb-46952b82f7ce-tigera-ca-bundle\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537297 kubelet[2749]: I0916 04:54:09.537046 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-xtables-lock\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537297 kubelet[2749]: I0916 04:54:09.537061 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-cni-bin-dir\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.537297 kubelet[2749]: I0916 04:54:09.537086 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22855b96-b829-46ff-98eb-46952b82f7ce-lib-modules\") pod \"calico-node-4kzqx\" (UID: \"22855b96-b829-46ff-98eb-46952b82f7ce\") " pod="calico-system/calico-node-4kzqx" Sep 16 04:54:09.587792 containerd[1569]: time="2025-09-16T04:54:09.587732321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fb68c8d94-gjts2,Uid:943fd266-dda9-4571-a959-43cf5de76761,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ac4514089252242b32e4ef35fbaa04539d08142251867ed98014883ecb4e388\"" Sep 16 04:54:09.589868 containerd[1569]: time="2025-09-16T04:54:09.589819678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:54:09.647517 kubelet[2749]: E0916 04:54:09.647435 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.647517 kubelet[2749]: W0916 04:54:09.647465 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.649333 kubelet[2749]: E0916 04:54:09.649292 2749 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.656565 kubelet[2749]: E0916 04:54:09.656339 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.656565 kubelet[2749]: W0916 04:54:09.656354 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.656565 kubelet[2749]: E0916 04:54:09.656371 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.783170 kubelet[2749]: E0916 04:54:09.782816 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06" Sep 16 04:54:09.825365 containerd[1569]: time="2025-09-16T04:54:09.825319913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4kzqx,Uid:22855b96-b829-46ff-98eb-46952b82f7ce,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:09.826759 kubelet[2749]: E0916 04:54:09.826726 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.826759 kubelet[2749]: W0916 04:54:09.826751 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.826875 kubelet[2749]: E0916 04:54:09.826773 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.827315 kubelet[2749]: E0916 04:54:09.827291 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.827315 kubelet[2749]: W0916 04:54:09.827311 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.827400 kubelet[2749]: E0916 04:54:09.827331 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.827623 kubelet[2749]: E0916 04:54:09.827582 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.827670 kubelet[2749]: W0916 04:54:09.827604 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.827670 kubelet[2749]: E0916 04:54:09.827646 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.828062 kubelet[2749]: [... the same three FlexVolume driver-call messages (driver-call.go:262, driver-call.go:149, plugins.go:695) repeated 17 more times between 04:54:09.828 and 04:54:09.836 ...] Sep 16 04:54:09.838769 kubelet[2749]: E0916 04:54:09.838640 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.838769 kubelet[2749]: W0916 04:54:09.838651 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.838769 kubelet[2749]: E0916 04:54:09.838661 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.838769 kubelet[2749]: I0916 04:54:09.838701 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06-socket-dir\") pod \"csi-node-driver-9nfwp\" (UID: \"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06\") " pod="calico-system/csi-node-driver-9nfwp" Sep 16 04:54:09.838902 kubelet[2749]: E0916 04:54:09.838855 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.838902 kubelet[2749]: W0916 04:54:09.838863 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.838902 kubelet[2749]: E0916 04:54:09.838876 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.839037 kubelet[2749]: E0916 04:54:09.839016 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.839037 kubelet[2749]: W0916 04:54:09.839033 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.839088 kubelet[2749]: E0916 04:54:09.839041 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.839455 kubelet[2749]: I0916 04:54:09.839125 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82p5q\" (UniqueName: \"kubernetes.io/projected/fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06-kube-api-access-82p5q\") pod \"csi-node-driver-9nfwp\" (UID: \"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06\") " pod="calico-system/csi-node-driver-9nfwp" Sep 16 04:54:09.839455 kubelet[2749]: E0916 04:54:09.839271 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.839455 kubelet[2749]: W0916 04:54:09.839280 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.839455 kubelet[2749]: E0916 04:54:09.839287 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.840564 kubelet[2749]: E0916 04:54:09.840549 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.840564 kubelet[2749]: W0916 04:54:09.840563 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.840624 kubelet[2749]: E0916 04:54:09.840575 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.840785 kubelet[2749]: E0916 04:54:09.840760 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.840815 kubelet[2749]: W0916 04:54:09.840786 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.840885 kubelet[2749]: E0916 04:54:09.840870 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.840978 kubelet[2749]: E0916 04:54:09.840962 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.840978 kubelet[2749]: W0916 04:54:09.840974 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.840978 kubelet[2749]: E0916 04:54:09.840981 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.841327 kubelet[2749]: I0916 04:54:09.841232 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06-varrun\") pod \"csi-node-driver-9nfwp\" (UID: \"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06\") " pod="calico-system/csi-node-driver-9nfwp" Sep 16 04:54:09.841622 kubelet[2749]: E0916 04:54:09.841488 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.841622 kubelet[2749]: W0916 04:54:09.841619 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.841680 kubelet[2749]: E0916 04:54:09.841633 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.843580 kubelet[2749]: E0916 04:54:09.843544 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.843580 kubelet[2749]: W0916 04:54:09.843560 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.843580 kubelet[2749]: E0916 04:54:09.843579 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.843746 kubelet[2749]: E0916 04:54:09.843728 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.843746 kubelet[2749]: W0916 04:54:09.843738 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.843746 kubelet[2749]: E0916 04:54:09.843746 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.843836 kubelet[2749]: I0916 04:54:09.843770 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06-kubelet-dir\") pod \"csi-node-driver-9nfwp\" (UID: \"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06\") " pod="calico-system/csi-node-driver-9nfwp" Sep 16 04:54:09.843915 kubelet[2749]: E0916 04:54:09.843897 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.843915 kubelet[2749]: W0916 04:54:09.843911 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.844003 kubelet[2749]: E0916 04:54:09.843929 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.844003 kubelet[2749]: I0916 04:54:09.843941 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06-registration-dir\") pod \"csi-node-driver-9nfwp\" (UID: \"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06\") " pod="calico-system/csi-node-driver-9nfwp" Sep 16 04:54:09.844090 kubelet[2749]: E0916 04:54:09.844072 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.844090 kubelet[2749]: W0916 04:54:09.844086 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.844167 kubelet[2749]: E0916 04:54:09.844095 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.844218 kubelet[2749]: E0916 04:54:09.844191 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.844247 kubelet[2749]: W0916 04:54:09.844214 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.844267 kubelet[2749]: E0916 04:54:09.844248 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.844382 kubelet[2749]: E0916 04:54:09.844364 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.844382 kubelet[2749]: W0916 04:54:09.844379 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.844547 kubelet[2749]: E0916 04:54:09.844386 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.844594 kubelet[2749]: E0916 04:54:09.844552 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.844594 kubelet[2749]: W0916 04:54:09.844570 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.844594 kubelet[2749]: E0916 04:54:09.844578 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.857379 containerd[1569]: time="2025-09-16T04:54:09.857310442Z" level=info msg="connecting to shim 5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c" address="unix:///run/containerd/s/06a3f212940dfa80d2cdc5644ce5feb380bb56bee31b685f033e7c2e9c9b84e6" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:09.888813 systemd[1]: Started cri-containerd-5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c.scope - libcontainer container 5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c. 
Sep 16 04:54:09.944903 kubelet[2749]: E0916 04:54:09.944863 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.944903 kubelet[2749]: W0916 04:54:09.944891 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.944903 kubelet[2749]: E0916 04:54:09.944916 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.945376 kubelet[2749]: E0916 04:54:09.945124 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.945376 kubelet[2749]: W0916 04:54:09.945132 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.945376 kubelet[2749]: E0916 04:54:09.945208 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.945439 kubelet[2749]: E0916 04:54:09.945384 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.945439 kubelet[2749]: W0916 04:54:09.945393 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.945439 kubelet[2749]: E0916 04:54:09.945411 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.945895 kubelet[2749]: E0916 04:54:09.945570 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.945895 kubelet[2749]: W0916 04:54:09.945577 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.945895 kubelet[2749]: E0916 04:54:09.945594 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.946186 kubelet[2749]: E0916 04:54:09.945941 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.946186 kubelet[2749]: W0916 04:54:09.945962 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.946186 kubelet[2749]: E0916 04:54:09.945993 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.947407 kubelet[2749]: E0916 04:54:09.946563 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.947407 kubelet[2749]: W0916 04:54:09.946577 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.947407 kubelet[2749]: E0916 04:54:09.947375 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.947858 kubelet[2749]: E0916 04:54:09.947748 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.947858 kubelet[2749]: W0916 04:54:09.947759 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.947858 kubelet[2749]: E0916 04:54:09.947794 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.947995 kubelet[2749]: E0916 04:54:09.947985 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.948465 kubelet[2749]: W0916 04:54:09.948372 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.948465 kubelet[2749]: E0916 04:54:09.948422 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.948786 kubelet[2749]: E0916 04:54:09.948570 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.948786 kubelet[2749]: W0916 04:54:09.948578 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.948786 kubelet[2749]: E0916 04:54:09.948616 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.950631 kubelet[2749]: E0916 04:54:09.950578 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.950631 kubelet[2749]: W0916 04:54:09.950601 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.951552 kubelet[2749]: E0916 04:54:09.950700 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.951552 kubelet[2749]: E0916 04:54:09.950749 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.951552 kubelet[2749]: W0916 04:54:09.950755 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.951552 kubelet[2749]: E0916 04:54:09.950835 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.951790 kubelet[2749]: E0916 04:54:09.951706 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.951790 kubelet[2749]: W0916 04:54:09.951718 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.951939 kubelet[2749]: E0916 04:54:09.951824 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.952443 kubelet[2749]: E0916 04:54:09.952329 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.952443 kubelet[2749]: W0916 04:54:09.952368 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.952443 kubelet[2749]: E0916 04:54:09.952427 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.953107 kubelet[2749]: E0916 04:54:09.953036 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.953583 kubelet[2749]: W0916 04:54:09.953532 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.953641 kubelet[2749]: E0916 04:54:09.953605 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.954056 kubelet[2749]: E0916 04:54:09.953855 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.954294 kubelet[2749]: W0916 04:54:09.954111 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.954294 kubelet[2749]: E0916 04:54:09.954145 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.954893 kubelet[2749]: E0916 04:54:09.954825 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.954893 kubelet[2749]: W0916 04:54:09.954835 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.954893 kubelet[2749]: E0916 04:54:09.954850 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.955190 kubelet[2749]: E0916 04:54:09.955163 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.955190 kubelet[2749]: W0916 04:54:09.955180 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.955980 kubelet[2749]: E0916 04:54:09.955955 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.956025 kubelet[2749]: E0916 04:54:09.955806 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.956025 kubelet[2749]: W0916 04:54:09.956011 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.956255 kubelet[2749]: E0916 04:54:09.956229 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.956574 kubelet[2749]: E0916 04:54:09.956434 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.956574 kubelet[2749]: W0916 04:54:09.956561 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.956914 kubelet[2749]: E0916 04:54:09.956892 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.957180 kubelet[2749]: E0916 04:54:09.957159 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.957298 kubelet[2749]: W0916 04:54:09.957277 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.957520 kubelet[2749]: E0916 04:54:09.957449 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.958118 kubelet[2749]: E0916 04:54:09.958093 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.958118 kubelet[2749]: W0916 04:54:09.958109 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.958375 kubelet[2749]: E0916 04:54:09.958349 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.958704 kubelet[2749]: E0916 04:54:09.958680 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.958704 kubelet[2749]: W0916 04:54:09.958695 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.958942 kubelet[2749]: E0916 04:54:09.958918 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.959357 containerd[1569]: time="2025-09-16T04:54:09.958807066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4kzqx,Uid:22855b96-b829-46ff-98eb-46952b82f7ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\"" Sep 16 04:54:09.959557 kubelet[2749]: E0916 04:54:09.959537 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.959557 kubelet[2749]: W0916 04:54:09.959552 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.959745 kubelet[2749]: E0916 04:54:09.959723 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.960182 kubelet[2749]: E0916 04:54:09.960159 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.960182 kubelet[2749]: W0916 04:54:09.960177 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.960749 kubelet[2749]: E0916 04:54:09.960726 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.960749 kubelet[2749]: W0916 04:54:09.960744 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.960798 kubelet[2749]: E0916 04:54:09.960753 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:09.961843 kubelet[2749]: E0916 04:54:09.961723 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:09.973230 kubelet[2749]: E0916 04:54:09.973183 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:09.973230 kubelet[2749]: W0916 04:54:09.973213 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:09.973230 kubelet[2749]: E0916 04:54:09.973226 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:11.190378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016548535.mount: Deactivated successfully. Sep 16 04:54:11.303428 kubelet[2749]: E0916 04:54:11.303358 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06" Sep 16 04:54:11.717100 containerd[1569]: time="2025-09-16T04:54:11.717037663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:11.718148 containerd[1569]: time="2025-09-16T04:54:11.717977167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 16 04:54:11.719107 containerd[1569]: time="2025-09-16T04:54:11.719079681Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:11.721237 containerd[1569]: time="2025-09-16T04:54:11.721216107Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:11.743877 containerd[1569]: time="2025-09-16T04:54:11.742375902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.152523983s" Sep 16 04:54:11.743877 containerd[1569]: time="2025-09-16T04:54:11.742420896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 16 04:54:11.777744 containerd[1569]: time="2025-09-16T04:54:11.776948356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 04:54:11.786102 containerd[1569]: time="2025-09-16T04:54:11.786047336Z" level=info msg="CreateContainer within sandbox \"9ac4514089252242b32e4ef35fbaa04539d08142251867ed98014883ecb4e388\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 04:54:11.806453 containerd[1569]: time="2025-09-16T04:54:11.805714270Z" level=info msg="Container f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:11.825190 containerd[1569]: time="2025-09-16T04:54:11.825144880Z" level=info msg="CreateContainer within sandbox \"9ac4514089252242b32e4ef35fbaa04539d08142251867ed98014883ecb4e388\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce\"" Sep 16 04:54:11.825873 containerd[1569]: time="2025-09-16T04:54:11.825830424Z" level=info msg="StartContainer for 
\"f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce\"" Sep 16 04:54:11.827147 containerd[1569]: time="2025-09-16T04:54:11.827068693Z" level=info msg="connecting to shim f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce" address="unix:///run/containerd/s/dd4f0d20f26e08e5e98e62cbdade98c36b823ab89f723568624504df0918b681" protocol=ttrpc version=3 Sep 16 04:54:11.851681 systemd[1]: Started cri-containerd-f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce.scope - libcontainer container f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce. Sep 16 04:54:11.904429 containerd[1569]: time="2025-09-16T04:54:11.904396922Z" level=info msg="StartContainer for \"f29176b57cf98185216dd5896b2d735526232b936cee595fcfd03293227ab5ce\" returns successfully" Sep 16 04:54:12.306192 kubelet[2749]: E0916 04:54:12.306144 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06" Sep 16 04:54:12.393719 kubelet[2749]: I0916 04:54:12.393626 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fb68c8d94-gjts2" podStartSLOduration=1.222381267 podStartE2EDuration="3.393603795s" podCreationTimestamp="2025-09-16 04:54:09 +0000 UTC" firstStartedPulling="2025-09-16 04:54:09.589365921 +0000 UTC m=+19.390417127" lastFinishedPulling="2025-09-16 04:54:11.760588439 +0000 UTC m=+21.561639655" observedRunningTime="2025-09-16 04:54:12.392729254 +0000 UTC m=+22.193780459" watchObservedRunningTime="2025-09-16 04:54:12.393603795 +0000 UTC m=+22.194655002" Sep 16 04:54:12.454409 kubelet[2749]: E0916 04:54:12.454371 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 
04:54:12.454409 kubelet[2749]: W0916 04:54:12.454397 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.454638 kubelet[2749]: E0916 04:54:12.454419 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:12.454669 kubelet[2749]: E0916 04:54:12.454656 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.454694 kubelet[2749]: W0916 04:54:12.454671 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.454725 kubelet[2749]: E0916 04:54:12.454694 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:12.454951 kubelet[2749]: E0916 04:54:12.454930 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.455001 kubelet[2749]: W0916 04:54:12.454964 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.455001 kubelet[2749]: E0916 04:54:12.454975 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:12.455269 kubelet[2749]: E0916 04:54:12.455246 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.455269 kubelet[2749]: W0916 04:54:12.455259 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.455269 kubelet[2749]: E0916 04:54:12.455268 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:12.456084 kubelet[2749]: E0916 04:54:12.455410 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.456084 kubelet[2749]: W0916 04:54:12.455418 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.456084 kubelet[2749]: E0916 04:54:12.455426 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:12.456084 kubelet[2749]: E0916 04:54:12.455630 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.456084 kubelet[2749]: W0916 04:54:12.455645 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.456084 kubelet[2749]: E0916 04:54:12.455660 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:12.456084 kubelet[2749]: E0916 04:54:12.455901 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.456084 kubelet[2749]: W0916 04:54:12.455917 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.456084 kubelet[2749]: E0916 04:54:12.455933 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:12.456416 kubelet[2749]: E0916 04:54:12.456393 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.456416 kubelet[2749]: W0916 04:54:12.456407 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.456416 kubelet[2749]: E0916 04:54:12.456419 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:12.456592 kubelet[2749]: E0916 04:54:12.456580 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:12.456592 kubelet[2749]: W0916 04:54:12.456591 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:12.456665 kubelet[2749]: E0916 04:54:12.456599 2749 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:13.291720 containerd[1569]: time="2025-09-16T04:54:13.291327163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:13.306676 containerd[1569]: time="2025-09-16T04:54:13.292672382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 16 04:54:13.323004 containerd[1569]: time="2025-09-16T04:54:13.322949160Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:13.324426 containerd[1569]: time="2025-09-16T04:54:13.323597934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.546614903s" Sep 16 04:54:13.324426 containerd[1569]: time="2025-09-16T04:54:13.323646246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 16 04:54:13.324426 containerd[1569]: time="2025-09-16T04:54:13.323949938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:13.325898 containerd[1569]: time="2025-09-16T04:54:13.325869943Z" level=info msg="CreateContainer within sandbox \"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 04:54:13.335437 containerd[1569]: time="2025-09-16T04:54:13.335393117Z" level=info msg="Container 816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:13.355310 containerd[1569]: time="2025-09-16T04:54:13.355260096Z" level=info msg="CreateContainer within sandbox \"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\"" Sep 16 04:54:13.356025 containerd[1569]: time="2025-09-16T04:54:13.355980306Z" level=info msg="StartContainer for \"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\"" Sep 16 04:54:13.357195 containerd[1569]: time="2025-09-16T04:54:13.357151015Z" level=info msg="connecting to shim 816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751" address="unix:///run/containerd/s/06a3f212940dfa80d2cdc5644ce5feb380bb56bee31b685f033e7c2e9c9b84e6" protocol=ttrpc version=3 Sep 16 04:54:13.382703 systemd[1]: Started cri-containerd-816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751.scope - libcontainer container 816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751. Sep 16 04:54:13.386194 kubelet[2749]: I0916 04:54:13.386064 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:54:13.418545 containerd[1569]: time="2025-09-16T04:54:13.418385895Z" level=info msg="StartContainer for \"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\" returns successfully" Sep 16 04:54:13.422955 systemd[1]: cri-containerd-816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751.scope: Deactivated successfully. 
Sep 16 04:54:13.432203 containerd[1569]: time="2025-09-16T04:54:13.432145907Z" level=info msg="received exit event container_id:\"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\" id:\"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\" pid:3445 exited_at:{seconds:1757998453 nanos:425840606}" Sep 16 04:54:13.461634 containerd[1569]: time="2025-09-16T04:54:13.461371941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\" id:\"816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751\" pid:3445 exited_at:{seconds:1757998453 nanos:425840606}" Sep 16 04:54:13.483326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-816c502347b0026cd5a1955c620d35c5b98e6f2c3f0318b876048d10f51cd751-rootfs.mount: Deactivated successfully. Sep 16 04:54:14.306783 kubelet[2749]: E0916 04:54:14.306731 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06" Sep 16 04:54:14.392527 containerd[1569]: time="2025-09-16T04:54:14.392139725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 04:54:16.303476 kubelet[2749]: E0916 04:54:16.303402 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06" Sep 16 04:54:17.968844 containerd[1569]: time="2025-09-16T04:54:17.967850907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 
04:54:17.969617 containerd[1569]: time="2025-09-16T04:54:17.969572993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 04:54:17.970135 containerd[1569]: time="2025-09-16T04:54:17.970086811Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:17.972749 containerd[1569]: time="2025-09-16T04:54:17.972716057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:17.973363 containerd[1569]: time="2025-09-16T04:54:17.973337808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.581157327s" Sep 16 04:54:17.973459 containerd[1569]: time="2025-09-16T04:54:17.973439350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 04:54:17.976662 containerd[1569]: time="2025-09-16T04:54:17.976631286Z" level=info msg="CreateContainer within sandbox \"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:54:18.003804 containerd[1569]: time="2025-09-16T04:54:18.000592394Z" level=info msg="Container 2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:18.014622 containerd[1569]: time="2025-09-16T04:54:18.014563398Z" level=info msg="CreateContainer within sandbox 
\"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\"" Sep 16 04:54:18.015364 containerd[1569]: time="2025-09-16T04:54:18.015317518Z" level=info msg="StartContainer for \"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\"" Sep 16 04:54:18.020756 containerd[1569]: time="2025-09-16T04:54:18.020719005Z" level=info msg="connecting to shim 2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6" address="unix:///run/containerd/s/06a3f212940dfa80d2cdc5644ce5feb380bb56bee31b685f033e7c2e9c9b84e6" protocol=ttrpc version=3 Sep 16 04:54:18.046633 systemd[1]: Started cri-containerd-2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6.scope - libcontainer container 2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6. Sep 16 04:54:18.094600 containerd[1569]: time="2025-09-16T04:54:18.094522105Z" level=info msg="StartContainer for \"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\" returns successfully" Sep 16 04:54:18.305085 kubelet[2749]: E0916 04:54:18.304762 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06" Sep 16 04:54:18.500487 systemd[1]: cri-containerd-2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6.scope: Deactivated successfully. Sep 16 04:54:18.500717 systemd[1]: cri-containerd-2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6.scope: Consumed 370ms CPU time, 167.3M memory peak, 14.1M read from disk, 171.3M written to disk. 
Sep 16 04:54:18.545538 containerd[1569]: time="2025-09-16T04:54:18.545171612Z" level=info msg="received exit event container_id:\"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\" id:\"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\" pid:3503 exited_at:{seconds:1757998458 nanos:544924918}" Sep 16 04:54:18.546562 kubelet[2749]: I0916 04:54:18.546473 2749 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:54:18.591454 containerd[1569]: time="2025-09-16T04:54:18.591336630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\" id:\"2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6\" pid:3503 exited_at:{seconds:1757998458 nanos:544924918}" Sep 16 04:54:18.613109 kubelet[2749]: I0916 04:54:18.613075 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rcn5\" (UniqueName: \"kubernetes.io/projected/c3b36834-9d58-46a1-ab2f-ae1eac82c896-kube-api-access-7rcn5\") pod \"whisker-7d79b78d5c-4j8xt\" (UID: \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\") " pod="calico-system/whisker-7d79b78d5c-4j8xt" Sep 16 04:54:18.613229 kubelet[2749]: I0916 04:54:18.613115 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-ca-bundle\") pod \"whisker-7d79b78d5c-4j8xt\" (UID: \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\") " pod="calico-system/whisker-7d79b78d5c-4j8xt" Sep 16 04:54:18.613229 kubelet[2749]: I0916 04:54:18.613132 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-backend-key-pair\") pod \"whisker-7d79b78d5c-4j8xt\" (UID: 
\"c3b36834-9d58-46a1-ab2f-ae1eac82c896\") " pod="calico-system/whisker-7d79b78d5c-4j8xt" Sep 16 04:54:18.618837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b714d3b57de935e84edb78e9cb22f53ea46510ea5051c2773e1032c83750be6-rootfs.mount: Deactivated successfully. Sep 16 04:54:18.625982 systemd[1]: Created slice kubepods-besteffort-podc3b36834_9d58_46a1_ab2f_ae1eac82c896.slice - libcontainer container kubepods-besteffort-podc3b36834_9d58_46a1_ab2f_ae1eac82c896.slice. Sep 16 04:54:18.639556 systemd[1]: Created slice kubepods-besteffort-podf5f9912e_f440_4935_b278_a8b4b136786d.slice - libcontainer container kubepods-besteffort-podf5f9912e_f440_4935_b278_a8b4b136786d.slice. Sep 16 04:54:18.647521 kubelet[2749]: W0916 04:54:18.646387 2749 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459-0-0-n-200d586c0a" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459-0-0-n-200d586c0a' and this object Sep 16 04:54:18.647682 kubelet[2749]: E0916 04:54:18.647660 2749 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459-0-0-n-200d586c0a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459-0-0-n-200d586c0a' and this object" logger="UnhandledError" Sep 16 04:54:18.651534 systemd[1]: Created slice kubepods-burstable-pod6e8e3fa2_5116_467f_86f3_f0897fcfaddf.slice - libcontainer container kubepods-burstable-pod6e8e3fa2_5116_467f_86f3_f0897fcfaddf.slice. 
Sep 16 04:54:18.660225 systemd[1]: Created slice kubepods-besteffort-podd9df7419_2dd3_4168_a272_4c33962cc46a.slice - libcontainer container kubepods-besteffort-podd9df7419_2dd3_4168_a272_4c33962cc46a.slice.
Sep 16 04:54:18.669063 systemd[1]: Created slice kubepods-burstable-podc7b14ac7_1066_42fb_b1ca_cd746b21268d.slice - libcontainer container kubepods-burstable-podc7b14ac7_1066_42fb_b1ca_cd746b21268d.slice.
Sep 16 04:54:18.678880 systemd[1]: Created slice kubepods-besteffort-podf4cc43e3_456a_4c62_a062_12998d6b9b30.slice - libcontainer container kubepods-besteffort-podf4cc43e3_456a_4c62_a062_12998d6b9b30.slice.
Sep 16 04:54:18.688538 systemd[1]: Created slice kubepods-besteffort-pod2cf15689_32b5_4778_b0c2_c93602dd5a17.slice - libcontainer container kubepods-besteffort-pod2cf15689_32b5_4778_b0c2_c93602dd5a17.slice.
Sep 16 04:54:18.713692 kubelet[2749]: I0916 04:54:18.713661 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftj5h\" (UniqueName: \"kubernetes.io/projected/d9df7419-2dd3-4168-a272-4c33962cc46a-kube-api-access-ftj5h\") pod \"calico-kube-controllers-79f564b968-87w2g\" (UID: \"d9df7419-2dd3-4168-a272-4c33962cc46a\") " pod="calico-system/calico-kube-controllers-79f564b968-87w2g"
Sep 16 04:54:18.714596 kubelet[2749]: I0916 04:54:18.714350 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4cc43e3-456a-4c62-a062-12998d6b9b30-calico-apiserver-certs\") pod \"calico-apiserver-586c95c678-wshk4\" (UID: \"f4cc43e3-456a-4c62-a062-12998d6b9b30\") " pod="calico-apiserver/calico-apiserver-586c95c678-wshk4"
Sep 16 04:54:18.714596 kubelet[2749]: I0916 04:54:18.714374 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2cf15689-32b5-4778-b0c2-c93602dd5a17-calico-apiserver-certs\") pod \"calico-apiserver-586c95c678-xqkfq\" (UID: \"2cf15689-32b5-4778-b0c2-c93602dd5a17\") " pod="calico-apiserver/calico-apiserver-586c95c678-xqkfq"
Sep 16 04:54:18.714596 kubelet[2749]: I0916 04:54:18.714410 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f9912e-f440-4935-b278-a8b4b136786d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-725zd\" (UID: \"f5f9912e-f440-4935-b278-a8b4b136786d\") " pod="calico-system/goldmane-54d579b49d-725zd"
Sep 16 04:54:18.714596 kubelet[2749]: I0916 04:54:18.714426 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5p6\" (UniqueName: \"kubernetes.io/projected/f5f9912e-f440-4935-b278-a8b4b136786d-kube-api-access-cm5p6\") pod \"goldmane-54d579b49d-725zd\" (UID: \"f5f9912e-f440-4935-b278-a8b4b136786d\") " pod="calico-system/goldmane-54d579b49d-725zd"
Sep 16 04:54:18.714596 kubelet[2749]: I0916 04:54:18.714440 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrs8b\" (UniqueName: \"kubernetes.io/projected/f4cc43e3-456a-4c62-a062-12998d6b9b30-kube-api-access-mrs8b\") pod \"calico-apiserver-586c95c678-wshk4\" (UID: \"f4cc43e3-456a-4c62-a062-12998d6b9b30\") " pod="calico-apiserver/calico-apiserver-586c95c678-wshk4"
Sep 16 04:54:18.714953 kubelet[2749]: I0916 04:54:18.714454 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e8e3fa2-5116-467f-86f3-f0897fcfaddf-config-volume\") pod \"coredns-668d6bf9bc-c44xj\" (UID: \"6e8e3fa2-5116-467f-86f3-f0897fcfaddf\") " pod="kube-system/coredns-668d6bf9bc-c44xj"
Sep 16 04:54:18.714953 kubelet[2749]: I0916 04:54:18.714467 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7jk\" (UniqueName: \"kubernetes.io/projected/6e8e3fa2-5116-467f-86f3-f0897fcfaddf-kube-api-access-2p7jk\") pod \"coredns-668d6bf9bc-c44xj\" (UID: \"6e8e3fa2-5116-467f-86f3-f0897fcfaddf\") " pod="kube-system/coredns-668d6bf9bc-c44xj"
Sep 16 04:54:18.714953 kubelet[2749]: I0916 04:54:18.714480 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64md7\" (UniqueName: \"kubernetes.io/projected/2cf15689-32b5-4778-b0c2-c93602dd5a17-kube-api-access-64md7\") pod \"calico-apiserver-586c95c678-xqkfq\" (UID: \"2cf15689-32b5-4778-b0c2-c93602dd5a17\") " pod="calico-apiserver/calico-apiserver-586c95c678-xqkfq"
Sep 16 04:54:18.714953 kubelet[2749]: I0916 04:54:18.714759 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f9912e-f440-4935-b278-a8b4b136786d-config\") pod \"goldmane-54d579b49d-725zd\" (UID: \"f5f9912e-f440-4935-b278-a8b4b136786d\") " pod="calico-system/goldmane-54d579b49d-725zd"
Sep 16 04:54:18.714953 kubelet[2749]: I0916 04:54:18.714785 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9df7419-2dd3-4168-a272-4c33962cc46a-tigera-ca-bundle\") pod \"calico-kube-controllers-79f564b968-87w2g\" (UID: \"d9df7419-2dd3-4168-a272-4c33962cc46a\") " pod="calico-system/calico-kube-controllers-79f564b968-87w2g"
Sep 16 04:54:18.715166 kubelet[2749]: I0916 04:54:18.714798 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7b14ac7-1066-42fb-b1ca-cd746b21268d-config-volume\") pod \"coredns-668d6bf9bc-q7fw9\" (UID: \"c7b14ac7-1066-42fb-b1ca-cd746b21268d\") " pod="kube-system/coredns-668d6bf9bc-q7fw9"
Sep 16 04:54:18.715166 kubelet[2749]: I0916 04:54:18.714830 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm52d\" (UniqueName: \"kubernetes.io/projected/c7b14ac7-1066-42fb-b1ca-cd746b21268d-kube-api-access-xm52d\") pod \"coredns-668d6bf9bc-q7fw9\" (UID: \"c7b14ac7-1066-42fb-b1ca-cd746b21268d\") " pod="kube-system/coredns-668d6bf9bc-q7fw9"
Sep 16 04:54:18.715166 kubelet[2749]: I0916 04:54:18.714863 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f5f9912e-f440-4935-b278-a8b4b136786d-goldmane-key-pair\") pod \"goldmane-54d579b49d-725zd\" (UID: \"f5f9912e-f440-4935-b278-a8b4b136786d\") " pod="calico-system/goldmane-54d579b49d-725zd"
Sep 16 04:54:18.936780 containerd[1569]: time="2025-09-16T04:54:18.936729446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d79b78d5c-4j8xt,Uid:c3b36834-9d58-46a1-ab2f-ae1eac82c896,Namespace:calico-system,Attempt:0,}"
Sep 16 04:54:18.945823 containerd[1569]: time="2025-09-16T04:54:18.945759109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-725zd,Uid:f5f9912e-f440-4935-b278-a8b4b136786d,Namespace:calico-system,Attempt:0,}"
Sep 16 04:54:18.956081 containerd[1569]: time="2025-09-16T04:54:18.956049816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c44xj,Uid:6e8e3fa2-5116-467f-86f3-f0897fcfaddf,Namespace:kube-system,Attempt:0,}"
Sep 16 04:54:18.973436 containerd[1569]: time="2025-09-16T04:54:18.972161282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f564b968-87w2g,Uid:d9df7419-2dd3-4168-a272-4c33962cc46a,Namespace:calico-system,Attempt:0,}"
Sep 16 04:54:18.994793 containerd[1569]: time="2025-09-16T04:54:18.994759268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fw9,Uid:c7b14ac7-1066-42fb-b1ca-cd746b21268d,Namespace:kube-system,Attempt:0,}"
Sep 16 04:54:19.163299 containerd[1569]: time="2025-09-16T04:54:19.163064667Z" level=error msg="Failed to destroy network for sandbox \"c291adb1b0374d1de105d136c451a93685d6810fae52f6dabe13f6c2f504be23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.166337 systemd[1]: run-netns-cni\x2d45a6c192\x2d4858\x2d17b7\x2d7517\x2de5e217e54640.mount: Deactivated successfully.
Sep 16 04:54:19.169816 containerd[1569]: time="2025-09-16T04:54:19.169773985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c44xj,Uid:6e8e3fa2-5116-467f-86f3-f0897fcfaddf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c291adb1b0374d1de105d136c451a93685d6810fae52f6dabe13f6c2f504be23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.170393 kubelet[2749]: E0916 04:54:19.170352 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c291adb1b0374d1de105d136c451a93685d6810fae52f6dabe13f6c2f504be23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.170468 kubelet[2749]: E0916 04:54:19.170438 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c291adb1b0374d1de105d136c451a93685d6810fae52f6dabe13f6c2f504be23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c44xj"
Sep 16 04:54:19.170523 kubelet[2749]: E0916 04:54:19.170473 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c291adb1b0374d1de105d136c451a93685d6810fae52f6dabe13f6c2f504be23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-c44xj"
Sep 16 04:54:19.170740 kubelet[2749]: E0916 04:54:19.170627 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c44xj_kube-system(6e8e3fa2-5116-467f-86f3-f0897fcfaddf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c44xj_kube-system(6e8e3fa2-5116-467f-86f3-f0897fcfaddf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c291adb1b0374d1de105d136c451a93685d6810fae52f6dabe13f6c2f504be23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-c44xj" podUID="6e8e3fa2-5116-467f-86f3-f0897fcfaddf"
Sep 16 04:54:19.180732 containerd[1569]: time="2025-09-16T04:54:19.180680928Z" level=error msg="Failed to destroy network for sandbox \"af9fcee28dcfca7318ebb0755b4aa9fd8cb3862b2e05651dcb185e1d2feed369\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.182978 systemd[1]: run-netns-cni\x2d5f6e5a42\x2d49b7\x2d00e3\x2df873\x2d655cc073c249.mount: Deactivated successfully.
Sep 16 04:54:19.186527 containerd[1569]: time="2025-09-16T04:54:19.186431389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fw9,Uid:c7b14ac7-1066-42fb-b1ca-cd746b21268d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fcee28dcfca7318ebb0755b4aa9fd8cb3862b2e05651dcb185e1d2feed369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.186911 kubelet[2749]: E0916 04:54:19.186824 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fcee28dcfca7318ebb0755b4aa9fd8cb3862b2e05651dcb185e1d2feed369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.187050 containerd[1569]: time="2025-09-16T04:54:19.187032321Z" level=error msg="Failed to destroy network for sandbox \"7036cebcdde5c88d6ea10d93f8b04cf490321e78c2bd5f912449e50616a21465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.187345 kubelet[2749]: E0916 04:54:19.187125 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fcee28dcfca7318ebb0755b4aa9fd8cb3862b2e05651dcb185e1d2feed369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q7fw9"
Sep 16 04:54:19.187345 kubelet[2749]: E0916 04:54:19.187151 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fcee28dcfca7318ebb0755b4aa9fd8cb3862b2e05651dcb185e1d2feed369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q7fw9"
Sep 16 04:54:19.187345 kubelet[2749]: E0916 04:54:19.187190 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q7fw9_kube-system(c7b14ac7-1066-42fb-b1ca-cd746b21268d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q7fw9_kube-system(c7b14ac7-1066-42fb-b1ca-cd746b21268d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af9fcee28dcfca7318ebb0755b4aa9fd8cb3862b2e05651dcb185e1d2feed369\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q7fw9" podUID="c7b14ac7-1066-42fb-b1ca-cd746b21268d"
Sep 16 04:54:19.189343 systemd[1]: run-netns-cni\x2d833bd6dc\x2d31f0\x2ddb8d\x2d7850\x2df7459f0a7bbb.mount: Deactivated successfully.
Sep 16 04:54:19.192714 containerd[1569]: time="2025-09-16T04:54:19.192669078Z" level=error msg="Failed to destroy network for sandbox \"e8c6a68420352a1781391b3db7d6233f83fadf40743a2d9bca6469dc997548b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.192987 containerd[1569]: time="2025-09-16T04:54:19.192825492Z" level=error msg="Failed to destroy network for sandbox \"21a7f96d385a0a93d67e796c1ab9a31c0a1f43ff2329d28e33b351e37ca242d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.193171 containerd[1569]: time="2025-09-16T04:54:19.193133343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-725zd,Uid:f5f9912e-f440-4935-b278-a8b4b136786d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7036cebcdde5c88d6ea10d93f8b04cf490321e78c2bd5f912449e50616a21465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.193390 kubelet[2749]: E0916 04:54:19.193356 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7036cebcdde5c88d6ea10d93f8b04cf490321e78c2bd5f912449e50616a21465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.193440 kubelet[2749]: E0916 04:54:19.193405 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7036cebcdde5c88d6ea10d93f8b04cf490321e78c2bd5f912449e50616a21465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-725zd"
Sep 16 04:54:19.193440 kubelet[2749]: E0916 04:54:19.193423 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7036cebcdde5c88d6ea10d93f8b04cf490321e78c2bd5f912449e50616a21465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-725zd"
Sep 16 04:54:19.193481 kubelet[2749]: E0916 04:54:19.193461 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-725zd_calico-system(f5f9912e-f440-4935-b278-a8b4b136786d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-725zd_calico-system(f5f9912e-f440-4935-b278-a8b4b136786d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7036cebcdde5c88d6ea10d93f8b04cf490321e78c2bd5f912449e50616a21465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-725zd" podUID="f5f9912e-f440-4935-b278-a8b4b136786d"
Sep 16 04:54:19.194901 containerd[1569]: time="2025-09-16T04:54:19.194808708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d79b78d5c-4j8xt,Uid:c3b36834-9d58-46a1-ab2f-ae1eac82c896,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6a68420352a1781391b3db7d6233f83fadf40743a2d9bca6469dc997548b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.195177 kubelet[2749]: E0916 04:54:19.195118 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6a68420352a1781391b3db7d6233f83fadf40743a2d9bca6469dc997548b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.195339 kubelet[2749]: E0916 04:54:19.195296 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6a68420352a1781391b3db7d6233f83fadf40743a2d9bca6469dc997548b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d79b78d5c-4j8xt"
Sep 16 04:54:19.195376 kubelet[2749]: E0916 04:54:19.195340 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6a68420352a1781391b3db7d6233f83fadf40743a2d9bca6469dc997548b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d79b78d5c-4j8xt"
Sep 16 04:54:19.195430 kubelet[2749]: E0916 04:54:19.195374 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d79b78d5c-4j8xt_calico-system(c3b36834-9d58-46a1-ab2f-ae1eac82c896)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d79b78d5c-4j8xt_calico-system(c3b36834-9d58-46a1-ab2f-ae1eac82c896)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8c6a68420352a1781391b3db7d6233f83fadf40743a2d9bca6469dc997548b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d79b78d5c-4j8xt" podUID="c3b36834-9d58-46a1-ab2f-ae1eac82c896"
Sep 16 04:54:19.196260 containerd[1569]: time="2025-09-16T04:54:19.196232971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f564b968-87w2g,Uid:d9df7419-2dd3-4168-a272-4c33962cc46a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a7f96d385a0a93d67e796c1ab9a31c0a1f43ff2329d28e33b351e37ca242d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.196808 kubelet[2749]: E0916 04:54:19.196422 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a7f96d385a0a93d67e796c1ab9a31c0a1f43ff2329d28e33b351e37ca242d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:19.196808 kubelet[2749]: E0916 04:54:19.196449 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a7f96d385a0a93d67e796c1ab9a31c0a1f43ff2329d28e33b351e37ca242d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f564b968-87w2g"
Sep 16 04:54:19.196808 kubelet[2749]: E0916 04:54:19.196461 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21a7f96d385a0a93d67e796c1ab9a31c0a1f43ff2329d28e33b351e37ca242d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f564b968-87w2g"
Sep 16 04:54:19.196883 kubelet[2749]: E0916 04:54:19.196519 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79f564b968-87w2g_calico-system(d9df7419-2dd3-4168-a272-4c33962cc46a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79f564b968-87w2g_calico-system(d9df7419-2dd3-4168-a272-4c33962cc46a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21a7f96d385a0a93d67e796c1ab9a31c0a1f43ff2329d28e33b351e37ca242d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79f564b968-87w2g" podUID="d9df7419-2dd3-4168-a272-4c33962cc46a"
Sep 16 04:54:19.412724 containerd[1569]: time="2025-09-16T04:54:19.412581937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 16 04:54:19.826723 kubelet[2749]: E0916 04:54:19.826667 2749 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:54:19.826723 kubelet[2749]: E0916 04:54:19.826722 2749 projected.go:194] Error preparing data for projected volume kube-api-access-64md7 for pod calico-apiserver/calico-apiserver-586c95c678-xqkfq: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:54:19.828806 kubelet[2749]: E0916 04:54:19.826806 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cf15689-32b5-4778-b0c2-c93602dd5a17-kube-api-access-64md7 podName:2cf15689-32b5-4778-b0c2-c93602dd5a17 nodeName:}" failed. No retries permitted until 2025-09-16 04:54:20.326782198 +0000 UTC m=+30.127833414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-64md7" (UniqueName: "kubernetes.io/projected/2cf15689-32b5-4778-b0c2-c93602dd5a17-kube-api-access-64md7") pod "calico-apiserver-586c95c678-xqkfq" (UID: "2cf15689-32b5-4778-b0c2-c93602dd5a17") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:54:19.836522 kubelet[2749]: E0916 04:54:19.836441 2749 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:54:19.836522 kubelet[2749]: E0916 04:54:19.836480 2749 projected.go:194] Error preparing data for projected volume kube-api-access-mrs8b for pod calico-apiserver/calico-apiserver-586c95c678-wshk4: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:54:19.836709 kubelet[2749]: E0916 04:54:19.836555 2749 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4cc43e3-456a-4c62-a062-12998d6b9b30-kube-api-access-mrs8b podName:f4cc43e3-456a-4c62-a062-12998d6b9b30 nodeName:}" failed. No retries permitted until 2025-09-16 04:54:20.33653636 +0000 UTC m=+30.137587576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mrs8b" (UniqueName: "kubernetes.io/projected/f4cc43e3-456a-4c62-a062-12998d6b9b30-kube-api-access-mrs8b") pod "calico-apiserver-586c95c678-wshk4" (UID: "f4cc43e3-456a-4c62-a062-12998d6b9b30") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:54:20.004427 systemd[1]: run-netns-cni\x2df8a486bb\x2d4a53\x2d17cd\x2d9ecc\x2d52dde772fa5c.mount: Deactivated successfully.
Sep 16 04:54:20.004614 systemd[1]: run-netns-cni\x2de65d361a\x2db3af\x2d1d83\x2d47f5\x2d3edfc4d7f3ec.mount: Deactivated successfully.
Sep 16 04:54:20.312135 systemd[1]: Created slice kubepods-besteffort-podfb1bf5b8_6a81_4981_94ca_ffc8e11d2e06.slice - libcontainer container kubepods-besteffort-podfb1bf5b8_6a81_4981_94ca_ffc8e11d2e06.slice.
Sep 16 04:54:20.314762 containerd[1569]: time="2025-09-16T04:54:20.314713979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nfwp,Uid:fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06,Namespace:calico-system,Attempt:0,}"
Sep 16 04:54:20.374911 containerd[1569]: time="2025-09-16T04:54:20.374859613Z" level=error msg="Failed to destroy network for sandbox \"1e15cd62f932a21fb5119769f35a59c6f944427750904bc59cf6c8b5e0ba4e40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.377354 systemd[1]: run-netns-cni\x2d68332e5a\x2de52e\x2dd954\x2d2fca\x2d28154a0e6c83.mount: Deactivated successfully.
Sep 16 04:54:20.377813 containerd[1569]: time="2025-09-16T04:54:20.377761366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nfwp,Uid:fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15cd62f932a21fb5119769f35a59c6f944427750904bc59cf6c8b5e0ba4e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.378369 kubelet[2749]: E0916 04:54:20.378297 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15cd62f932a21fb5119769f35a59c6f944427750904bc59cf6c8b5e0ba4e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.379314 kubelet[2749]: E0916 04:54:20.378423 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15cd62f932a21fb5119769f35a59c6f944427750904bc59cf6c8b5e0ba4e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9nfwp"
Sep 16 04:54:20.379314 kubelet[2749]: E0916 04:54:20.378452 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e15cd62f932a21fb5119769f35a59c6f944427750904bc59cf6c8b5e0ba4e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9nfwp"
Sep 16 04:54:20.379314 kubelet[2749]: E0916 04:54:20.378629 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9nfwp_calico-system(fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9nfwp_calico-system(fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e15cd62f932a21fb5119769f35a59c6f944427750904bc59cf6c8b5e0ba4e40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9nfwp" podUID="fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06"
Sep 16 04:54:20.486529 containerd[1569]: time="2025-09-16T04:54:20.485804119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-wshk4,Uid:f4cc43e3-456a-4c62-a062-12998d6b9b30,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:54:20.494178 containerd[1569]: time="2025-09-16T04:54:20.494106523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-xqkfq,Uid:2cf15689-32b5-4778-b0c2-c93602dd5a17,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:54:20.545834 containerd[1569]: time="2025-09-16T04:54:20.545795532Z" level=error msg="Failed to destroy network for sandbox \"b97ffde0a7461c1779aa44f5411dfdfb060d8fef464c73c0533b251e17a8650b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.547346 containerd[1569]: time="2025-09-16T04:54:20.547288093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-wshk4,Uid:f4cc43e3-456a-4c62-a062-12998d6b9b30,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ffde0a7461c1779aa44f5411dfdfb060d8fef464c73c0533b251e17a8650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.547797 kubelet[2749]: E0916 04:54:20.547762 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ffde0a7461c1779aa44f5411dfdfb060d8fef464c73c0533b251e17a8650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.548005 kubelet[2749]: E0916 04:54:20.547987 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ffde0a7461c1779aa44f5411dfdfb060d8fef464c73c0533b251e17a8650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c95c678-wshk4"
Sep 16 04:54:20.548084 kubelet[2749]: E0916 04:54:20.548068 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b97ffde0a7461c1779aa44f5411dfdfb060d8fef464c73c0533b251e17a8650b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c95c678-wshk4"
Sep 16 04:54:20.550978 kubelet[2749]: E0916 04:54:20.549550 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-586c95c678-wshk4_calico-apiserver(f4cc43e3-456a-4c62-a062-12998d6b9b30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-586c95c678-wshk4_calico-apiserver(f4cc43e3-456a-4c62-a062-12998d6b9b30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b97ffde0a7461c1779aa44f5411dfdfb060d8fef464c73c0533b251e17a8650b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-586c95c678-wshk4" podUID="f4cc43e3-456a-4c62-a062-12998d6b9b30"
Sep 16 04:54:20.555364 kubelet[2749]: I0916 04:54:20.554531 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:54:20.563461 containerd[1569]: time="2025-09-16T04:54:20.563094857Z" level=error msg="Failed to destroy network for sandbox \"048fe11585ae38d7ed9924ed18ae330eb1bdc64c9731385f97871b1602c61752\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.564588 containerd[1569]: time="2025-09-16T04:54:20.564486678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-xqkfq,Uid:2cf15689-32b5-4778-b0c2-c93602dd5a17,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"048fe11585ae38d7ed9924ed18ae330eb1bdc64c9731385f97871b1602c61752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.565356 kubelet[2749]: E0916 04:54:20.564776 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048fe11585ae38d7ed9924ed18ae330eb1bdc64c9731385f97871b1602c61752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 16 04:54:20.565425 kubelet[2749]: E0916 04:54:20.565363 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048fe11585ae38d7ed9924ed18ae330eb1bdc64c9731385f97871b1602c61752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c95c678-xqkfq"
Sep 16 04:54:20.565425 kubelet[2749]: E0916 04:54:20.565383 2749 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"048fe11585ae38d7ed9924ed18ae330eb1bdc64c9731385f97871b1602c61752\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586c95c678-xqkfq"
Sep 16 04:54:20.566538 kubelet[2749]: E0916 04:54:20.565417 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-586c95c678-xqkfq_calico-apiserver(2cf15689-32b5-4778-b0c2-c93602dd5a17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-586c95c678-xqkfq_calico-apiserver(2cf15689-32b5-4778-b0c2-c93602dd5a17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"048fe11585ae38d7ed9924ed18ae330eb1bdc64c9731385f97871b1602c61752\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-586c95c678-xqkfq" podUID="2cf15689-32b5-4778-b0c2-c93602dd5a17"
Sep 16 04:54:26.103058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930913319.mount: Deactivated successfully. Sep 16 04:54:26.145435 containerd[1569]: time="2025-09-16T04:54:26.145357869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:26.147041 containerd[1569]: time="2025-09-16T04:54:26.145905389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 04:54:26.148855 containerd[1569]: time="2025-09-16T04:54:26.148694183Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:26.156075 containerd[1569]: time="2025-09-16T04:54:26.155971408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:26.156807 containerd[1569]: time="2025-09-16T04:54:26.156778055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.744158757s" Sep 16 04:54:26.157652 containerd[1569]: time="2025-09-16T04:54:26.156812751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 04:54:26.200524 containerd[1569]: time="2025-09-16T04:54:26.199394403Z" level=info msg="CreateContainer within sandbox \"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:54:26.227520 containerd[1569]: time="2025-09-16T04:54:26.224648212Z" level=info msg="Container 56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:26.231720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332346449.mount: Deactivated successfully. Sep 16 04:54:26.248733 containerd[1569]: time="2025-09-16T04:54:26.248696945Z" level=info msg="CreateContainer within sandbox \"5e38c5e2b91d5467f8859612b961d88c7bb11c3b0357a8cf82460e979afcf15c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\"" Sep 16 04:54:26.251700 containerd[1569]: time="2025-09-16T04:54:26.251669484Z" level=info msg="StartContainer for \"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\"" Sep 16 04:54:26.253538 containerd[1569]: time="2025-09-16T04:54:26.253457656Z" level=info msg="connecting to shim 56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05" address="unix:///run/containerd/s/06a3f212940dfa80d2cdc5644ce5feb380bb56bee31b685f033e7c2e9c9b84e6" protocol=ttrpc version=3 Sep 16 04:54:26.355638 systemd[1]: Started cri-containerd-56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05.scope - libcontainer container 56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05. 
Sep 16 04:54:26.423260 containerd[1569]: time="2025-09-16T04:54:26.423200751Z" level=info msg="StartContainer for \"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" returns successfully" Sep 16 04:54:26.490707 kubelet[2749]: I0916 04:54:26.490195 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4kzqx" podStartSLOduration=1.293581287 podStartE2EDuration="17.488478613s" podCreationTimestamp="2025-09-16 04:54:09 +0000 UTC" firstStartedPulling="2025-09-16 04:54:09.962801821 +0000 UTC m=+19.763853027" lastFinishedPulling="2025-09-16 04:54:26.157699146 +0000 UTC m=+35.958750353" observedRunningTime="2025-09-16 04:54:26.487727841 +0000 UTC m=+36.288779048" watchObservedRunningTime="2025-09-16 04:54:26.488478613 +0000 UTC m=+36.289529819" Sep 16 04:54:26.498382 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:54:26.500072 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 16 04:54:26.770386 kubelet[2749]: I0916 04:54:26.769660 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-ca-bundle\") pod \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\" (UID: \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\") " Sep 16 04:54:26.772698 kubelet[2749]: I0916 04:54:26.769968 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c3b36834-9d58-46a1-ab2f-ae1eac82c896" (UID: "c3b36834-9d58-46a1-ab2f-ae1eac82c896"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:54:26.772698 kubelet[2749]: I0916 04:54:26.770565 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rcn5\" (UniqueName: \"kubernetes.io/projected/c3b36834-9d58-46a1-ab2f-ae1eac82c896-kube-api-access-7rcn5\") pod \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\" (UID: \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\") " Sep 16 04:54:26.772698 kubelet[2749]: I0916 04:54:26.772275 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-backend-key-pair\") pod \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\" (UID: \"c3b36834-9d58-46a1-ab2f-ae1eac82c896\") " Sep 16 04:54:26.772698 kubelet[2749]: I0916 04:54:26.772358 2749 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-ca-bundle\") on node \"ci-4459-0-0-n-200d586c0a\" DevicePath \"\"" Sep 16 04:54:26.788529 kubelet[2749]: I0916 04:54:26.787016 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b36834-9d58-46a1-ab2f-ae1eac82c896-kube-api-access-7rcn5" (OuterVolumeSpecName: "kube-api-access-7rcn5") pod "c3b36834-9d58-46a1-ab2f-ae1eac82c896" (UID: "c3b36834-9d58-46a1-ab2f-ae1eac82c896"). InnerVolumeSpecName "kube-api-access-7rcn5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:54:26.788529 kubelet[2749]: I0916 04:54:26.787445 2749 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c3b36834-9d58-46a1-ab2f-ae1eac82c896" (UID: "c3b36834-9d58-46a1-ab2f-ae1eac82c896"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:54:26.794707 containerd[1569]: time="2025-09-16T04:54:26.794654763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"36df530c37c8bf8e81636d4fccfe48c55663a8a5f46cee357184a4c9ad6f8d99\" pid:3821 exit_status:1 exited_at:{seconds:1757998466 nanos:757213427}" Sep 16 04:54:26.873343 kubelet[2749]: I0916 04:54:26.873299 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7rcn5\" (UniqueName: \"kubernetes.io/projected/c3b36834-9d58-46a1-ab2f-ae1eac82c896-kube-api-access-7rcn5\") on node \"ci-4459-0-0-n-200d586c0a\" DevicePath \"\"" Sep 16 04:54:26.873343 kubelet[2749]: I0916 04:54:26.873340 2749 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3b36834-9d58-46a1-ab2f-ae1eac82c896-whisker-backend-key-pair\") on node \"ci-4459-0-0-n-200d586c0a\" DevicePath \"\"" Sep 16 04:54:27.098907 systemd[1]: var-lib-kubelet-pods-c3b36834\x2d9d58\x2d46a1\x2dab2f\x2dae1eac82c896-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rcn5.mount: Deactivated successfully. Sep 16 04:54:27.099010 systemd[1]: var-lib-kubelet-pods-c3b36834\x2d9d58\x2d46a1\x2dab2f\x2dae1eac82c896-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 04:54:27.450056 systemd[1]: Removed slice kubepods-besteffort-podc3b36834_9d58_46a1_ab2f_ae1eac82c896.slice - libcontainer container kubepods-besteffort-podc3b36834_9d58_46a1_ab2f_ae1eac82c896.slice. Sep 16 04:54:27.521567 systemd[1]: Created slice kubepods-besteffort-pod1d48e85f_9120_4500_a393_b7b092e30fba.slice - libcontainer container kubepods-besteffort-pod1d48e85f_9120_4500_a393_b7b092e30fba.slice. 
Sep 16 04:54:27.598429 containerd[1569]: time="2025-09-16T04:54:27.598315683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"337c4758c9638aba78783c230647de7089f37c47293a4a621470050f2b8e4e68\" pid:3868 exit_status:1 exited_at:{seconds:1757998467 nanos:597838305}" Sep 16 04:54:27.678986 kubelet[2749]: I0916 04:54:27.678909 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txv9\" (UniqueName: \"kubernetes.io/projected/1d48e85f-9120-4500-a393-b7b092e30fba-kube-api-access-5txv9\") pod \"whisker-bd849688-f577h\" (UID: \"1d48e85f-9120-4500-a393-b7b092e30fba\") " pod="calico-system/whisker-bd849688-f577h" Sep 16 04:54:27.678986 kubelet[2749]: I0916 04:54:27.678976 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d48e85f-9120-4500-a393-b7b092e30fba-whisker-ca-bundle\") pod \"whisker-bd849688-f577h\" (UID: \"1d48e85f-9120-4500-a393-b7b092e30fba\") " pod="calico-system/whisker-bd849688-f577h" Sep 16 04:54:27.678986 kubelet[2749]: I0916 04:54:27.679000 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d48e85f-9120-4500-a393-b7b092e30fba-whisker-backend-key-pair\") pod \"whisker-bd849688-f577h\" (UID: \"1d48e85f-9120-4500-a393-b7b092e30fba\") " pod="calico-system/whisker-bd849688-f577h" Sep 16 04:54:27.831037 containerd[1569]: time="2025-09-16T04:54:27.830919777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd849688-f577h,Uid:1d48e85f-9120-4500-a393-b7b092e30fba,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:28.241464 systemd-networkd[1479]: cali2a36e5f91ff: Link UP Sep 16 04:54:28.242639 systemd-networkd[1479]: cali2a36e5f91ff: Gained carrier Sep 16 04:54:28.274821 
containerd[1569]: 2025-09-16 04:54:27.944 [INFO][3898] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:54:28.274821 containerd[1569]: 2025-09-16 04:54:27.980 [INFO][3898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0 whisker-bd849688- calico-system 1d48e85f-9120-4500-a393-b7b092e30fba 863 0 2025-09-16 04:54:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bd849688 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a whisker-bd849688-f577h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2a36e5f91ff [] [] }} ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-" Sep 16 04:54:28.274821 containerd[1569]: 2025-09-16 04:54:27.980 [INFO][3898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.274821 containerd[1569]: 2025-09-16 04:54:28.172 [INFO][3973] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" HandleID="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Workload="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.174 [INFO][3973] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" 
HandleID="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Workload="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-n-200d586c0a", "pod":"whisker-bd849688-f577h", "timestamp":"2025-09-16 04:54:28.172607667 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.175 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.175 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.175 [INFO][3973] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.192 [INFO][3973] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.201 [INFO][3973] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.206 [INFO][3973] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.208 [INFO][3973] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275228 containerd[1569]: 2025-09-16 04:54:28.211 [INFO][3973] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.211 [INFO][3973] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.213 [INFO][3973] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.218 [INFO][3973] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.224 [INFO][3973] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.129/26] block=192.168.101.128/26 handle="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.224 [INFO][3973] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.129/26] handle="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.224 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:54:28.275391 containerd[1569]: 2025-09-16 04:54:28.224 [INFO][3973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.129/26] IPv6=[] ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" HandleID="k8s-pod-network.6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Workload="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.276247 containerd[1569]: 2025-09-16 04:54:28.227 [INFO][3898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0", GenerateName:"whisker-bd849688-", Namespace:"calico-system", SelfLink:"", UID:"1d48e85f-9120-4500-a393-b7b092e30fba", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd849688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"whisker-bd849688-f577h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali2a36e5f91ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:28.276247 containerd[1569]: 2025-09-16 04:54:28.227 [INFO][3898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.129/32] ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.276349 containerd[1569]: 2025-09-16 04:54:28.227 [INFO][3898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a36e5f91ff ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.276349 containerd[1569]: 2025-09-16 04:54:28.243 [INFO][3898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.276412 containerd[1569]: 2025-09-16 04:54:28.243 [INFO][3898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0", GenerateName:"whisker-bd849688-", Namespace:"calico-system", SelfLink:"", UID:"1d48e85f-9120-4500-a393-b7b092e30fba", 
ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd849688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c", Pod:"whisker-bd849688-f577h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2a36e5f91ff", MAC:"96:2c:69:ea:d4:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:28.276482 containerd[1569]: 2025-09-16 04:54:28.262 [INFO][3898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" Namespace="calico-system" Pod="whisker-bd849688-f577h" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-whisker--bd849688--f577h-eth0" Sep 16 04:54:28.310553 kubelet[2749]: I0916 04:54:28.310269 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b36834-9d58-46a1-ab2f-ae1eac82c896" path="/var/lib/kubelet/pods/c3b36834-9d58-46a1-ab2f-ae1eac82c896/volumes" Sep 16 04:54:28.432663 containerd[1569]: time="2025-09-16T04:54:28.432607827Z" level=info msg="connecting to shim 6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c" 
address="unix:///run/containerd/s/16f7d8da41a7cd2c2f2d119f58d53188d8b628341cc697b40c33d3999ee8e862" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:28.468877 systemd[1]: Started cri-containerd-6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c.scope - libcontainer container 6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c. Sep 16 04:54:28.552976 containerd[1569]: time="2025-09-16T04:54:28.552935761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"5ec8b4796648f27ff010f691440fa433a468b4c9e5a461a14988027b9bcd8cc9\" pid:4070 exit_status:1 exited_at:{seconds:1757998468 nanos:552268297}" Sep 16 04:54:28.567394 containerd[1569]: time="2025-09-16T04:54:28.567348274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd849688-f577h,Uid:1d48e85f-9120-4500-a393-b7b092e30fba,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c\"" Sep 16 04:54:28.570743 containerd[1569]: time="2025-09-16T04:54:28.570714672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:54:28.690314 systemd-networkd[1479]: vxlan.calico: Link UP Sep 16 04:54:28.691092 systemd-networkd[1479]: vxlan.calico: Gained carrier Sep 16 04:54:29.719794 systemd-networkd[1479]: cali2a36e5f91ff: Gained IPv6LL Sep 16 04:54:30.424637 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Sep 16 04:54:30.528860 containerd[1569]: time="2025-09-16T04:54:30.528799400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:30.529880 containerd[1569]: time="2025-09-16T04:54:30.529765376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 16 04:54:30.530710 containerd[1569]: time="2025-09-16T04:54:30.530679564Z" level=info 
msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:30.532793 containerd[1569]: time="2025-09-16T04:54:30.532770813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:30.533185 containerd[1569]: time="2025-09-16T04:54:30.533148183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.962405499s" Sep 16 04:54:30.533230 containerd[1569]: time="2025-09-16T04:54:30.533188088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 16 04:54:30.536981 containerd[1569]: time="2025-09-16T04:54:30.536944868Z" level=info msg="CreateContainer within sandbox \"6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 04:54:30.544691 containerd[1569]: time="2025-09-16T04:54:30.544654035Z" level=info msg="Container fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:30.553948 containerd[1569]: time="2025-09-16T04:54:30.553908817Z" level=info msg="CreateContainer within sandbox \"6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d\"" Sep 16 04:54:30.555077 containerd[1569]: 
time="2025-09-16T04:54:30.554423013Z" level=info msg="StartContainer for \"fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d\"" Sep 16 04:54:30.555536 containerd[1569]: time="2025-09-16T04:54:30.555511068Z" level=info msg="connecting to shim fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d" address="unix:///run/containerd/s/16f7d8da41a7cd2c2f2d119f58d53188d8b628341cc697b40c33d3999ee8e862" protocol=ttrpc version=3 Sep 16 04:54:30.584633 systemd[1]: Started cri-containerd-fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d.scope - libcontainer container fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d. Sep 16 04:54:30.638074 containerd[1569]: time="2025-09-16T04:54:30.638028338Z" level=info msg="StartContainer for \"fd34ece0877e932d4a02665f2dee9d7eaaf9227e9a0de2bf17c224c9096a0f1d\" returns successfully" Sep 16 04:54:30.640747 containerd[1569]: time="2025-09-16T04:54:30.640672657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:54:31.305327 containerd[1569]: time="2025-09-16T04:54:31.304681830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-725zd,Uid:f5f9912e-f440-4935-b278-a8b4b136786d,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:31.305327 containerd[1569]: time="2025-09-16T04:54:31.304759215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-wshk4,Uid:f4cc43e3-456a-4c62-a062-12998d6b9b30,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:54:31.305746 containerd[1569]: time="2025-09-16T04:54:31.305461726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c44xj,Uid:6e8e3fa2-5116-467f-86f3-f0897fcfaddf,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:31.516252 systemd-networkd[1479]: cali3167b8891da: Link UP Sep 16 04:54:31.518137 systemd-networkd[1479]: cali3167b8891da: Gained carrier Sep 16 04:54:31.531547 containerd[1569]: 2025-09-16 04:54:31.416 [INFO][4220] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0 coredns-668d6bf9bc- kube-system 6e8e3fa2-5116-467f-86f3-f0897fcfaddf 783 0 2025-09-16 04:53:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a coredns-668d6bf9bc-c44xj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3167b8891da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-" Sep 16 04:54:31.531547 containerd[1569]: 2025-09-16 04:54:31.416 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.531547 containerd[1569]: 2025-09-16 04:54:31.469 [INFO][4245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" HandleID="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Workload="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.469 [INFO][4245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" HandleID="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Workload="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-0-0-n-200d586c0a", "pod":"coredns-668d6bf9bc-c44xj", "timestamp":"2025-09-16 04:54:31.469547439 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.469 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.469 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.469 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.479 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.484 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.488 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.490 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532414 containerd[1569]: 2025-09-16 04:54:31.492 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.492 [INFO][4245] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.493 [INFO][4245] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050 Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.498 [INFO][4245] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.503 [INFO][4245] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.130/26] block=192.168.101.128/26 handle="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.503 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.130/26] handle="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.503 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:54:31.532912 containerd[1569]: 2025-09-16 04:54:31.503 [INFO][4245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.130/26] IPv6=[] ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" HandleID="k8s-pod-network.95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Workload="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.533082 containerd[1569]: 2025-09-16 04:54:31.509 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6e8e3fa2-5116-467f-86f3-f0897fcfaddf", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"coredns-668d6bf9bc-c44xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali3167b8891da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:31.533082 containerd[1569]: 2025-09-16 04:54:31.510 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.130/32] ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.533082 containerd[1569]: 2025-09-16 04:54:31.511 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3167b8891da ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.533082 containerd[1569]: 2025-09-16 04:54:31.518 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.533082 containerd[1569]: 2025-09-16 04:54:31.518 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" 
WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6e8e3fa2-5116-467f-86f3-f0897fcfaddf", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050", Pod:"coredns-668d6bf9bc-c44xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3167b8891da", MAC:"3a:e4:8f:ed:ba:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:31.533082 
containerd[1569]: 2025-09-16 04:54:31.528 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" Namespace="kube-system" Pod="coredns-668d6bf9bc-c44xj" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--c44xj-eth0" Sep 16 04:54:31.554355 containerd[1569]: time="2025-09-16T04:54:31.554314333Z" level=info msg="connecting to shim 95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050" address="unix:///run/containerd/s/2a4bd2146fd1205871f52c786fb2f661a499f1749084552a4f6dcdd47270ef00" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:31.577624 systemd[1]: Started cri-containerd-95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050.scope - libcontainer container 95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050. Sep 16 04:54:31.624049 systemd-networkd[1479]: calieb9db5e66db: Link UP Sep 16 04:54:31.625687 systemd-networkd[1479]: calieb9db5e66db: Gained carrier Sep 16 04:54:31.635685 containerd[1569]: time="2025-09-16T04:54:31.635636917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c44xj,Uid:6e8e3fa2-5116-467f-86f3-f0897fcfaddf,Namespace:kube-system,Attempt:0,} returns sandbox id \"95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050\"" Sep 16 04:54:31.638834 containerd[1569]: time="2025-09-16T04:54:31.638809098Z" level=info msg="CreateContainer within sandbox \"95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.422 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0 calico-apiserver-586c95c678- calico-apiserver f4cc43e3-456a-4c62-a062-12998d6b9b30 785 0 2025-09-16 04:54:06 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586c95c678 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a calico-apiserver-586c95c678-wshk4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb9db5e66db [] [] }} ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.422 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.476 [INFO][4252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" HandleID="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.476 [INFO][4252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" HandleID="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332640), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4459-0-0-n-200d586c0a", "pod":"calico-apiserver-586c95c678-wshk4", "timestamp":"2025-09-16 04:54:31.476797411 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.477 [INFO][4252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.503 [INFO][4252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.503 [INFO][4252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.580 [INFO][4252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.585 [INFO][4252] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.592 [INFO][4252] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.595 [INFO][4252] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.597 [INFO][4252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.597 [INFO][4252] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 
handle="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.598 [INFO][4252] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8 Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.602 [INFO][4252] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.607 [INFO][4252] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.131/26] block=192.168.101.128/26 handle="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.607 [INFO][4252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.131/26] handle="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.607 [INFO][4252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:54:31.645937 containerd[1569]: 2025-09-16 04:54:31.608 [INFO][4252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.131/26] IPv6=[] ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" HandleID="k8s-pod-network.9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.647244 containerd[1569]: 2025-09-16 04:54:31.610 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0", GenerateName:"calico-apiserver-586c95c678-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4cc43e3-456a-4c62-a062-12998d6b9b30", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c95c678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"calico-apiserver-586c95c678-wshk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb9db5e66db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:31.647244 containerd[1569]: 2025-09-16 04:54:31.610 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.131/32] ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.647244 containerd[1569]: 2025-09-16 04:54:31.610 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb9db5e66db ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.647244 containerd[1569]: 2025-09-16 04:54:31.627 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.647244 containerd[1569]: 2025-09-16 04:54:31.629 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0", GenerateName:"calico-apiserver-586c95c678-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4cc43e3-456a-4c62-a062-12998d6b9b30", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c95c678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8", Pod:"calico-apiserver-586c95c678-wshk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb9db5e66db", MAC:"e6:5e:83:bf:23:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:31.647244 containerd[1569]: 2025-09-16 04:54:31.643 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-wshk4" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--wshk4-eth0" Sep 16 04:54:31.659277 containerd[1569]: time="2025-09-16T04:54:31.659249445Z" 
level=info msg="Container eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:31.659845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196885401.mount: Deactivated successfully. Sep 16 04:54:31.664455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873583308.mount: Deactivated successfully. Sep 16 04:54:31.673602 containerd[1569]: time="2025-09-16T04:54:31.673571807Z" level=info msg="CreateContainer within sandbox \"95b4f7c10b1849a5354ba9cfc0f5db530ed87d6be092fbd35608c1bde7c7e050\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e\"" Sep 16 04:54:31.675325 containerd[1569]: time="2025-09-16T04:54:31.674483689Z" level=info msg="StartContainer for \"eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e\"" Sep 16 04:54:31.675325 containerd[1569]: time="2025-09-16T04:54:31.675044453Z" level=info msg="connecting to shim eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e" address="unix:///run/containerd/s/2a4bd2146fd1205871f52c786fb2f661a499f1749084552a4f6dcdd47270ef00" protocol=ttrpc version=3 Sep 16 04:54:31.680687 containerd[1569]: time="2025-09-16T04:54:31.680662820Z" level=info msg="connecting to shim 9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8" address="unix:///run/containerd/s/4714ab23ff973b52a10c728a78f1097632025c1e2ded92719bde742af0a4617a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:31.700624 systemd[1]: Started cri-containerd-eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e.scope - libcontainer container eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e. Sep 16 04:54:31.710637 systemd[1]: Started cri-containerd-9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8.scope - libcontainer container 9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8. 
Sep 16 04:54:31.742528 systemd-networkd[1479]: calideb808ce55f: Link UP Sep 16 04:54:31.744316 systemd-networkd[1479]: calideb808ce55f: Gained carrier Sep 16 04:54:31.769519 containerd[1569]: time="2025-09-16T04:54:31.767990407Z" level=info msg="StartContainer for \"eb3c6c4b947bd78546f0d50d654fea614d128a6fe36990163fec55a28526fa9e\" returns successfully" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.419 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0 goldmane-54d579b49d- calico-system f5f9912e-f440-4935-b278-a8b4b136786d 791 0 2025-09-16 04:54:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a goldmane-54d579b49d-725zd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calideb808ce55f [] [] }} ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.419 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.477 [INFO][4247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" HandleID="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" 
Workload="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.477 [INFO][4247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" HandleID="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Workload="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-n-200d586c0a", "pod":"goldmane-54d579b49d-725zd", "timestamp":"2025-09-16 04:54:31.474196193 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.477 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.607 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.607 [INFO][4247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.680 [INFO][4247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.687 [INFO][4247] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.697 [INFO][4247] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.699 [INFO][4247] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.703 [INFO][4247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.703 [INFO][4247] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.706 [INFO][4247] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523 Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.713 [INFO][4247] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.722 [INFO][4247] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.132/26] block=192.168.101.128/26 handle="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.722 [INFO][4247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.132/26] handle="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.722 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:54:31.772984 containerd[1569]: 2025-09-16 04:54:31.722 [INFO][4247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.132/26] IPv6=[] ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" HandleID="k8s-pod-network.c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Workload="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" Sep 16 04:54:31.775450 containerd[1569]: 2025-09-16 04:54:31.728 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f5f9912e-f440-4935-b278-a8b4b136786d", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"goldmane-54d579b49d-725zd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calideb808ce55f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:31.775450 containerd[1569]: 2025-09-16 04:54:31.729 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.132/32] ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" Sep 16 04:54:31.775450 containerd[1569]: 2025-09-16 04:54:31.730 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calideb808ce55f ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" Sep 16 04:54:31.775450 containerd[1569]: 2025-09-16 04:54:31.748 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" Sep 16 04:54:31.775450 containerd[1569]: 2025-09-16 04:54:31.749 [INFO][4206] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f5f9912e-f440-4935-b278-a8b4b136786d", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523", Pod:"goldmane-54d579b49d-725zd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calideb808ce55f", MAC:"32:2a:00:0b:da:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:31.775450 containerd[1569]: 2025-09-16 04:54:31.765 [INFO][4206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" Namespace="calico-system" Pod="goldmane-54d579b49d-725zd" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-goldmane--54d579b49d--725zd-eth0"
Sep 16 04:54:31.799128 containerd[1569]: time="2025-09-16T04:54:31.799074793Z" level=info msg="connecting to shim c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523" address="unix:///run/containerd/s/31d7ab0d4ee0962f12972db79d1b7e1752c337077165da4b2a8bc8686eebf08a" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:54:31.826988 containerd[1569]: time="2025-09-16T04:54:31.826911868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-wshk4,Uid:f4cc43e3-456a-4c62-a062-12998d6b9b30,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8\""
Sep 16 04:54:31.833725 systemd[1]: Started cri-containerd-c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523.scope - libcontainer container c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523.
Sep 16 04:54:31.891257 containerd[1569]: time="2025-09-16T04:54:31.891211627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-725zd,Uid:f5f9912e-f440-4935-b278-a8b4b136786d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523\""
Sep 16 04:54:32.305475 containerd[1569]: time="2025-09-16T04:54:32.305423229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-xqkfq,Uid:2cf15689-32b5-4778-b0c2-c93602dd5a17,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:54:32.305794 containerd[1569]: time="2025-09-16T04:54:32.305717702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f564b968-87w2g,Uid:d9df7419-2dd3-4168-a272-4c33962cc46a,Namespace:calico-system,Attempt:0,}"
Sep 16 04:54:32.306640 containerd[1569]: time="2025-09-16T04:54:32.306114939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fw9,Uid:c7b14ac7-1066-42fb-b1ca-cd746b21268d,Namespace:kube-system,Attempt:0,}"
Sep 16 04:54:32.462413 systemd-networkd[1479]: calic9893038e4d: Link UP
Sep 16 04:54:32.464850 systemd-networkd[1479]: calic9893038e4d: Gained carrier
Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.384 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0 coredns-668d6bf9bc- kube-system c7b14ac7-1066-42fb-b1ca-cd746b21268d 790 0 2025-09-16 04:53:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a coredns-668d6bf9bc-q7fw9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9893038e4d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }}
ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.384 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.416 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" HandleID="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Workload="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.416 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" HandleID="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Workload="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-0-0-n-200d586c0a", "pod":"coredns-668d6bf9bc-q7fw9", "timestamp":"2025-09-16 04:54:32.416641659 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.416 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.416 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.416 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.432 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.436 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.440 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.441 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.443 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.443 [INFO][4516] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.444 [INFO][4516] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505 Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.448 [INFO][4516] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 
handle="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.453 [INFO][4516] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.101.133/26] block=192.168.101.128/26 handle="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.454 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.133/26] handle="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.454 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:54:32.497753 containerd[1569]: 2025-09-16 04:54:32.454 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.133/26] IPv6=[] ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" HandleID="k8s-pod-network.cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Workload="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.498483 containerd[1569]: 2025-09-16 04:54:32.456 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c7b14ac7-1066-42fb-b1ca-cd746b21268d", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 
53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"coredns-668d6bf9bc-q7fw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9893038e4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:32.498483 containerd[1569]: 2025-09-16 04:54:32.456 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.133/32] ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.498483 containerd[1569]: 2025-09-16 04:54:32.456 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9893038e4d ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.498483 containerd[1569]: 2025-09-16 04:54:32.465 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.498483 containerd[1569]: 2025-09-16 04:54:32.465 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c7b14ac7-1066-42fb-b1ca-cd746b21268d", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 53, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505", Pod:"coredns-668d6bf9bc-q7fw9", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9893038e4d", MAC:"12:b5:78:be:87:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:32.498483 containerd[1569]: 2025-09-16 04:54:32.488 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" Namespace="kube-system" Pod="coredns-668d6bf9bc-q7fw9" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-coredns--668d6bf9bc--q7fw9-eth0" Sep 16 04:54:32.505266 kubelet[2749]: I0916 04:54:32.505020 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c44xj" podStartSLOduration=35.504872041 podStartE2EDuration="35.504872041s" podCreationTimestamp="2025-09-16 04:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:32.504060616 +0000 UTC m=+42.305111823" watchObservedRunningTime="2025-09-16 04:54:32.504872041 +0000 UTC m=+42.305923248" Sep 16 04:54:32.530168 containerd[1569]: time="2025-09-16T04:54:32.530113924Z" level=info msg="connecting to shim cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505" address="unix:///run/containerd/s/d482634a9e6ee9eff9aaf5f6137b473890991c824744365ed171e218f16f10ab" 
namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:32.574617 systemd[1]: Started cri-containerd-cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505.scope - libcontainer container cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505. Sep 16 04:54:32.599691 systemd-networkd[1479]: calif678f323967: Link UP Sep 16 04:54:32.601091 systemd-networkd[1479]: calif678f323967: Gained carrier Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.377 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0 calico-apiserver-586c95c678- calico-apiserver 2cf15689-32b5-4778-b0c2-c93602dd5a17 788 0 2025-09-16 04:54:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586c95c678 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a calico-apiserver-586c95c678-xqkfq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif678f323967 [] [] }} ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.378 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.429 [INFO][4514] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" HandleID="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.429 [INFO][4514] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" HandleID="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-0-0-n-200d586c0a", "pod":"calico-apiserver-586c95c678-xqkfq", "timestamp":"2025-09-16 04:54:32.428992473 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.429 [INFO][4514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.454 [INFO][4514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.455 [INFO][4514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.532 [INFO][4514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.541 [INFO][4514] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.558 [INFO][4514] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.566 [INFO][4514] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.571 [INFO][4514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.571 [INFO][4514] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.573 [INFO][4514] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4 Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.583 [INFO][4514] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.589 [INFO][4514] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.134/26] block=192.168.101.128/26 handle="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.590 [INFO][4514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.134/26] handle="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.590 [INFO][4514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:54:32.618607 containerd[1569]: 2025-09-16 04:54:32.592 [INFO][4514] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.134/26] IPv6=[] ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" HandleID="k8s-pod-network.690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" Sep 16 04:54:32.620677 containerd[1569]: 2025-09-16 04:54:32.596 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0", GenerateName:"calico-apiserver-586c95c678-", Namespace:"calico-apiserver", SelfLink:"", UID:"2cf15689-32b5-4778-b0c2-c93602dd5a17", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c95c678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"calico-apiserver-586c95c678-xqkfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif678f323967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:32.620677 containerd[1569]: 2025-09-16 04:54:32.596 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.134/32] ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" Sep 16 04:54:32.620677 containerd[1569]: 2025-09-16 04:54:32.596 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif678f323967 ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" Sep 16 04:54:32.620677 containerd[1569]: 2025-09-16 04:54:32.602 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" 
Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" Sep 16 04:54:32.620677 containerd[1569]: 2025-09-16 04:54:32.602 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0", GenerateName:"calico-apiserver-586c95c678-", Namespace:"calico-apiserver", SelfLink:"", UID:"2cf15689-32b5-4778-b0c2-c93602dd5a17", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586c95c678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4", Pod:"calico-apiserver-586c95c678-xqkfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calif678f323967", MAC:"e6:2d:bc:ab:0e:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:54:32.620677 containerd[1569]: 2025-09-16 04:54:32.616 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" Namespace="calico-apiserver" Pod="calico-apiserver-586c95c678-xqkfq" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--apiserver--586c95c678--xqkfq-eth0"
Sep 16 04:54:32.670041 containerd[1569]: time="2025-09-16T04:54:32.669792583Z" level=info msg="connecting to shim 690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4" address="unix:///run/containerd/s/c600837e53c3fa08229685a386fb72ba497c97e87278a21d631022f69f7f8cc4" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:54:32.701177 containerd[1569]: time="2025-09-16T04:54:32.700922789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fw9,Uid:c7b14ac7-1066-42fb-b1ca-cd746b21268d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505\""
Sep 16 04:54:32.705922 containerd[1569]: time="2025-09-16T04:54:32.705194384Z" level=info msg="CreateContainer within sandbox \"cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 16 04:54:32.715230 systemd-networkd[1479]: cali45ca8505b31: Link UP
Sep 16 04:54:32.715753 systemd-networkd[1479]: cali45ca8505b31: Gained carrier
Sep 16 04:54:32.727808 systemd-networkd[1479]: cali3167b8891da: Gained IPv6LL
Sep 16 04:54:32.727934 systemd[1]: Started cri-containerd-690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4.scope - libcontainer container 690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4.
Sep 16 04:54:32.741916 containerd[1569]: time="2025-09-16T04:54:32.741817930Z" level=info msg="Container 4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.374 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0 calico-kube-controllers-79f564b968- calico-system d9df7419-2dd3-4168-a272-4c33962cc46a 789 0 2025-09-16 04:54:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79f564b968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a calico-kube-controllers-79f564b968-87w2g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45ca8505b31 [] [] }} ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.375 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.433 [INFO][4509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" 
HandleID="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.433 [INFO][4509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" HandleID="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003abb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-n-200d586c0a", "pod":"calico-kube-controllers-79f564b968-87w2g", "timestamp":"2025-09-16 04:54:32.432140628 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.433 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.590 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.592 [INFO][4509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.638 [INFO][4509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.661 [INFO][4509] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.670 [INFO][4509] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.673 [INFO][4509] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.677 [INFO][4509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.677 [INFO][4509] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.679 [INFO][4509] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.686 [INFO][4509] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.700 [INFO][4509] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.135/26] block=192.168.101.128/26 handle="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.701 [INFO][4509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.135/26] handle="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.701 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:54:32.759182 containerd[1569]: 2025-09-16 04:54:32.701 [INFO][4509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.135/26] IPv6=[] ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" HandleID="k8s-pod-network.cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Workload="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.765211 containerd[1569]: 2025-09-16 04:54:32.710 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0", GenerateName:"calico-kube-controllers-79f564b968-", Namespace:"calico-system", SelfLink:"", UID:"d9df7419-2dd3-4168-a272-4c33962cc46a", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f564b968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"calico-kube-controllers-79f564b968-87w2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45ca8505b31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:32.765211 containerd[1569]: 2025-09-16 04:54:32.710 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.135/32] ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.765211 containerd[1569]: 2025-09-16 04:54:32.710 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45ca8505b31 ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.765211 containerd[1569]: 2025-09-16 04:54:32.715 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.765211 containerd[1569]: 2025-09-16 04:54:32.725 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0", GenerateName:"calico-kube-controllers-79f564b968-", Namespace:"calico-system", SelfLink:"", UID:"d9df7419-2dd3-4168-a272-4c33962cc46a", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f564b968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc", Pod:"calico-kube-controllers-79f564b968-87w2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45ca8505b31", MAC:"26:cd:48:e4:17:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:32.765211 containerd[1569]: 2025-09-16 04:54:32.753 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" Namespace="calico-system" Pod="calico-kube-controllers-79f564b968-87w2g" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-calico--kube--controllers--79f564b968--87w2g-eth0" Sep 16 04:54:32.781160 containerd[1569]: time="2025-09-16T04:54:32.781093088Z" level=info msg="CreateContainer within sandbox \"cf204037489a3bffade84d80d79557d79fe8e69cc918c465e0661255871ec505\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab\"" Sep 16 04:54:32.785165 containerd[1569]: time="2025-09-16T04:54:32.784620835Z" level=info msg="StartContainer for \"4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab\"" Sep 16 04:54:32.788205 containerd[1569]: time="2025-09-16T04:54:32.788184350Z" level=info msg="connecting to shim 4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab" address="unix:///run/containerd/s/d482634a9e6ee9eff9aaf5f6137b473890991c824744365ed171e218f16f10ab" protocol=ttrpc version=3 Sep 16 04:54:32.814338 containerd[1569]: time="2025-09-16T04:54:32.812680483Z" level=info msg="connecting to shim cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc" address="unix:///run/containerd/s/c0868d0f3436344bc516a7e627464d37d2a857a1ec03486bd45f4ea208a0fbfc" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:32.821419 systemd[1]: Started cri-containerd-4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab.scope - libcontainer 
container 4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab. Sep 16 04:54:32.842059 containerd[1569]: time="2025-09-16T04:54:32.841931827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586c95c678-xqkfq,Uid:2cf15689-32b5-4778-b0c2-c93602dd5a17,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4\"" Sep 16 04:54:32.845287 systemd[1]: Started cri-containerd-cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc.scope - libcontainer container cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc. Sep 16 04:54:32.880998 containerd[1569]: time="2025-09-16T04:54:32.880954841Z" level=info msg="StartContainer for \"4ba97a663c087d2c552bda81f93bb4462e9f028c8f3785fbdda11b899c0e44ab\" returns successfully" Sep 16 04:54:32.914181 containerd[1569]: time="2025-09-16T04:54:32.914108027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f564b968-87w2g,Uid:d9df7419-2dd3-4168-a272-4c33962cc46a,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc\"" Sep 16 04:54:33.431859 systemd-networkd[1479]: calieb9db5e66db: Gained IPv6LL Sep 16 04:54:33.496166 systemd-networkd[1479]: calideb808ce55f: Gained IPv6LL Sep 16 04:54:33.530921 kubelet[2749]: I0916 04:54:33.530383 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q7fw9" podStartSLOduration=36.530361234 podStartE2EDuration="36.530361234s" podCreationTimestamp="2025-09-16 04:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:33.515625744 +0000 UTC m=+43.316676960" watchObservedRunningTime="2025-09-16 04:54:33.530361234 +0000 UTC m=+43.331412440" Sep 16 04:54:33.552455 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4041896215.mount: Deactivated successfully. Sep 16 04:54:33.817084 systemd-networkd[1479]: calif678f323967: Gained IPv6LL Sep 16 04:54:33.879762 systemd-networkd[1479]: calic9893038e4d: Gained IPv6LL Sep 16 04:54:34.007677 systemd-networkd[1479]: cali45ca8505b31: Gained IPv6LL Sep 16 04:54:34.174738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602492079.mount: Deactivated successfully. Sep 16 04:54:34.194594 containerd[1569]: time="2025-09-16T04:54:34.194539688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:34.195696 containerd[1569]: time="2025-09-16T04:54:34.195642520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 04:54:34.197020 containerd[1569]: time="2025-09-16T04:54:34.196636958Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:34.198991 containerd[1569]: time="2025-09-16T04:54:34.198936607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:34.199978 containerd[1569]: time="2025-09-16T04:54:34.199862677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.559158029s" Sep 16 04:54:34.200099 containerd[1569]: time="2025-09-16T04:54:34.200080586Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 04:54:34.202171 containerd[1569]: time="2025-09-16T04:54:34.202124666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:54:34.204014 containerd[1569]: time="2025-09-16T04:54:34.203982164Z" level=info msg="CreateContainer within sandbox \"6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:54:34.216463 containerd[1569]: time="2025-09-16T04:54:34.212815516Z" level=info msg="Container 55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:34.221795 containerd[1569]: time="2025-09-16T04:54:34.221748925Z" level=info msg="CreateContainer within sandbox \"6b6a9a396e5d8eda024d24b39ae3c2dfb7a3bcaf9f42c4c751a77428078c017c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d\"" Sep 16 04:54:34.224577 containerd[1569]: time="2025-09-16T04:54:34.222782076Z" level=info msg="StartContainer for \"55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d\"" Sep 16 04:54:34.224577 containerd[1569]: time="2025-09-16T04:54:34.224452143Z" level=info msg="connecting to shim 55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d" address="unix:///run/containerd/s/16f7d8da41a7cd2c2f2d119f58d53188d8b628341cc697b40c33d3999ee8e862" protocol=ttrpc version=3 Sep 16 04:54:34.272760 systemd[1]: Started cri-containerd-55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d.scope - libcontainer container 55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d. 
Sep 16 04:54:34.305275 containerd[1569]: time="2025-09-16T04:54:34.304897935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nfwp,Uid:fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:34.349699 containerd[1569]: time="2025-09-16T04:54:34.349647447Z" level=info msg="StartContainer for \"55cdcb525f39f3aaf6d21307a05dfa744e06a7d77c46dce4ff0ade757867296d\" returns successfully" Sep 16 04:54:34.421201 systemd-networkd[1479]: calie05df76a772: Link UP Sep 16 04:54:34.422261 systemd-networkd[1479]: calie05df76a772: Gained carrier Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.347 [INFO][4769] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0 csi-node-driver- calico-system fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06 684 0 2025-09-16 04:54:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-0-0-n-200d586c0a csi-node-driver-9nfwp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie05df76a772 [] [] }} ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.347 [INFO][4769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" Sep 16 04:54:34.445597 
containerd[1569]: 2025-09-16 04:54:34.378 [INFO][4790] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" HandleID="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Workload="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.378 [INFO][4790] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" HandleID="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Workload="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-0-0-n-200d586c0a", "pod":"csi-node-driver-9nfwp", "timestamp":"2025-09-16 04:54:34.378022013 +0000 UTC"}, Hostname:"ci-4459-0-0-n-200d586c0a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.378 [INFO][4790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.378 [INFO][4790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.378 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-0-0-n-200d586c0a' Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.385 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.391 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.397 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.399 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.401 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.401 [INFO][4790] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.403 [INFO][4790] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.407 [INFO][4790] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.416 [INFO][4790] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.101.136/26] block=192.168.101.128/26 handle="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.416 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.136/26] handle="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" host="ci-4459-0-0-n-200d586c0a" Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.416 [INFO][4790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:54:34.445597 containerd[1569]: 2025-09-16 04:54:34.416 [INFO][4790] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.136/26] IPv6=[] ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" HandleID="k8s-pod-network.48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Workload="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" Sep 16 04:54:34.446109 containerd[1569]: 2025-09-16 04:54:34.419 [INFO][4769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"", Pod:"csi-node-driver-9nfwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie05df76a772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:34.446109 containerd[1569]: 2025-09-16 04:54:34.419 [INFO][4769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.136/32] ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" Sep 16 04:54:34.446109 containerd[1569]: 2025-09-16 04:54:34.419 [INFO][4769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie05df76a772 ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" Sep 16 04:54:34.446109 containerd[1569]: 2025-09-16 04:54:34.422 [INFO][4769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" Sep 16 04:54:34.446109 
containerd[1569]: 2025-09-16 04:54:34.422 [INFO][4769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-0-0-n-200d586c0a", ContainerID:"48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a", Pod:"csi-node-driver-9nfwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie05df76a772", MAC:"82:1f:6a:9f:b1:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:54:34.446109 containerd[1569]: 
2025-09-16 04:54:34.438 [INFO][4769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" Namespace="calico-system" Pod="csi-node-driver-9nfwp" WorkloadEndpoint="ci--4459--0--0--n--200d586c0a-k8s-csi--node--driver--9nfwp-eth0"
Sep 16 04:54:34.478403 containerd[1569]: time="2025-09-16T04:54:34.478093687Z" level=info msg="connecting to shim 48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a" address="unix:///run/containerd/s/5a0f617d083bf8f6cc4ff4969d076b16b275697623fead6e8ec86fdf98f6184d" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:54:34.502687 systemd[1]: Started cri-containerd-48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a.scope - libcontainer container 48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a.
Sep 16 04:54:34.562373 containerd[1569]: time="2025-09-16T04:54:34.562298528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nfwp,Uid:fb1bf5b8-6a81-4981-94ca-ffc8e11d2e06,Namespace:calico-system,Attempt:0,} returns sandbox id \"48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a\""
Sep 16 04:54:35.608255 systemd-networkd[1479]: calie05df76a772: Gained IPv6LL
Sep 16 04:54:37.673311 containerd[1569]: time="2025-09-16T04:54:37.673003901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:37.674350 containerd[1569]: time="2025-09-16T04:54:37.674331284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 16 04:54:37.675249 containerd[1569]: time="2025-09-16T04:54:37.675215584Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:37.678530 containerd[1569]: time="2025-09-16T04:54:37.678115840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:37.679270 containerd[1569]: time="2025-09-16T04:54:37.679213991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.476778401s"
Sep 16 04:54:37.679270 containerd[1569]: time="2025-09-16T04:54:37.679245932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 16 04:54:37.682132 containerd[1569]: time="2025-09-16T04:54:37.682076056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 16 04:54:37.684282 containerd[1569]: time="2025-09-16T04:54:37.684208490Z" level=info msg="CreateContainer within sandbox \"9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:54:37.695571 containerd[1569]: time="2025-09-16T04:54:37.694856114Z" level=info msg="Container 8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:37.704737 containerd[1569]: time="2025-09-16T04:54:37.704685651Z" level=info msg="CreateContainer within sandbox \"9a8f6ac56ba90975fb85b32a674547e575e7028417f50fa3d997471e226025c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a\""
Sep 16 04:54:37.705639 containerd[1569]: time="2025-09-16T04:54:37.705607622Z" level=info msg="StartContainer for \"8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a\""
Sep 16 04:54:37.706421 containerd[1569]: time="2025-09-16T04:54:37.706391413Z" level=info msg="connecting to shim 8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a" address="unix:///run/containerd/s/4714ab23ff973b52a10c728a78f1097632025c1e2ded92719bde742af0a4617a" protocol=ttrpc version=3
Sep 16 04:54:37.740793 systemd[1]: Started cri-containerd-8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a.scope - libcontainer container 8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a.
Sep 16 04:54:37.796580 containerd[1569]: time="2025-09-16T04:54:37.796483042Z" level=info msg="StartContainer for \"8848bd96b96bdc1b52140e74409dada58628f5de623f2024cf34e65a6331b58a\" returns successfully"
Sep 16 04:54:38.545035 kubelet[2749]: I0916 04:54:38.544942 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bd849688-f577h" podStartSLOduration=5.913649986 podStartE2EDuration="11.544920339s" podCreationTimestamp="2025-09-16 04:54:27 +0000 UTC" firstStartedPulling="2025-09-16 04:54:28.570193342 +0000 UTC m=+38.371244547" lastFinishedPulling="2025-09-16 04:54:34.201463684 +0000 UTC m=+44.002514900" observedRunningTime="2025-09-16 04:54:34.523956651 +0000 UTC m=+44.325007858" watchObservedRunningTime="2025-09-16 04:54:38.544920339 +0000 UTC m=+48.345971555"
Sep 16 04:54:38.545483 kubelet[2749]: I0916 04:54:38.545111 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-586c95c678-wshk4" podStartSLOduration=26.692355996 podStartE2EDuration="32.545092452s" podCreationTimestamp="2025-09-16 04:54:06 +0000 UTC" firstStartedPulling="2025-09-16 04:54:31.828623385 +0000 UTC m=+41.629674591" lastFinishedPulling="2025-09-16 04:54:37.681359841 +0000 UTC m=+47.482411047" observedRunningTime="2025-09-16 04:54:38.544658106 +0000 UTC m=+48.345709312" watchObservedRunningTime="2025-09-16 04:54:38.545092452 +0000 UTC m=+48.346143668"
Sep 16 04:54:42.310984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898675170.mount: Deactivated successfully.
Sep 16 04:54:42.629509 containerd[1569]: time="2025-09-16T04:54:42.629434940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:42.630928 containerd[1569]: time="2025-09-16T04:54:42.630897365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 16 04:54:42.631859 containerd[1569]: time="2025-09-16T04:54:42.631812103Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:42.633571 containerd[1569]: time="2025-09-16T04:54:42.633534225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:42.634297 containerd[1569]: time="2025-09-16T04:54:42.633959734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.951853771s"
Sep 16 04:54:42.634297 containerd[1569]: time="2025-09-16T04:54:42.633988668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 16 04:54:42.634874 containerd[1569]: time="2025-09-16T04:54:42.634839154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 16 04:54:42.643300 containerd[1569]: time="2025-09-16T04:54:42.643276090Z" level=info msg="CreateContainer within sandbox \"c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 16 04:54:42.652925 containerd[1569]: time="2025-09-16T04:54:42.652839892Z" level=info msg="Container 7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:42.666650 containerd[1569]: time="2025-09-16T04:54:42.666613735Z" level=info msg="CreateContainer within sandbox \"c2f4c514494320170e8343c8e4688ed2006813631c537acf18a175430027b523\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\""
Sep 16 04:54:42.667327 containerd[1569]: time="2025-09-16T04:54:42.667308439Z" level=info msg="StartContainer for \"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\""
Sep 16 04:54:42.668270 containerd[1569]: time="2025-09-16T04:54:42.668248173Z" level=info msg="connecting to shim 7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d" address="unix:///run/containerd/s/31d7ab0d4ee0962f12972db79d1b7e1752c337077165da4b2a8bc8686eebf08a" protocol=ttrpc version=3
Sep 16 04:54:42.712224 systemd[1]: Started cri-containerd-7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d.scope - libcontainer container 7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d.
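The containerd entries above report image pulls with their wall-clock durations (e.g. `"... in 3.476778401s"` and, later, `"... in 527.663449ms"`). A minimal sketch of extracting those durations from raw journal lines like these; the helper name and regex are illustrative (not part of any tool shown in the log) and assume the escaped-quote `msg="..."` format containerd emits here:

```python
import re

# Matches containerd "Pulled image" journal lines, where quotes inside
# msg="..." appear as literal \" in the raw text. Captures the image
# reference and the trailing duration ("527.663449ms" or "3.476778401s").
PULL_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".* in (?P<secs>[\d.]+)(?P<unit>m?s)'
)

def pull_duration_seconds(line):
    """Return (image, seconds) for a "Pulled image" line, else None."""
    m = PULL_RE.search(line)
    if not m:
        return None
    secs = float(m.group("secs"))
    if m.group("unit") == "ms":
        secs /= 1000.0  # normalize milliseconds to seconds
    return m.group("image"), secs
```

Fed the goldmane line above, this would report roughly 4.95 s for a ~66 MB image, which is a quick way to spot slow registries when scanning a boot log.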
Sep 16 04:54:42.762204 containerd[1569]: time="2025-09-16T04:54:42.762158113Z" level=info msg="StartContainer for \"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" returns successfully"
Sep 16 04:54:43.156911 containerd[1569]: time="2025-09-16T04:54:43.156837561Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:43.159521 containerd[1569]: time="2025-09-16T04:54:43.157986678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 16 04:54:43.162578 containerd[1569]: time="2025-09-16T04:54:43.162530435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 527.663449ms"
Sep 16 04:54:43.162685 containerd[1569]: time="2025-09-16T04:54:43.162585569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 16 04:54:43.165467 containerd[1569]: time="2025-09-16T04:54:43.164806407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 16 04:54:43.172760 containerd[1569]: time="2025-09-16T04:54:43.172706906Z" level=info msg="CreateContainer within sandbox \"690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:54:43.182355 containerd[1569]: time="2025-09-16T04:54:43.182316362Z" level=info msg="Container e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:43.198397 containerd[1569]: time="2025-09-16T04:54:43.198329696Z" level=info msg="CreateContainer within sandbox \"690cf88fd665c68d482306766a6ab8ea9a92e5273dbbba2408517eb239bb25a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6\""
Sep 16 04:54:43.199227 containerd[1569]: time="2025-09-16T04:54:43.199163552Z" level=info msg="StartContainer for \"e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6\""
Sep 16 04:54:43.200851 containerd[1569]: time="2025-09-16T04:54:43.200481425Z" level=info msg="connecting to shim e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6" address="unix:///run/containerd/s/c600837e53c3fa08229685a386fb72ba497c97e87278a21d631022f69f7f8cc4" protocol=ttrpc version=3
Sep 16 04:54:43.231659 systemd[1]: Started cri-containerd-e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6.scope - libcontainer container e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6.
Sep 16 04:54:43.295230 containerd[1569]: time="2025-09-16T04:54:43.295144929Z" level=info msg="StartContainer for \"e6703c92ef83c5e3ac718c0f6968d568d4080ffcc671e2cfe9d3bc2a5e3fb0d6\" returns successfully"
Sep 16 04:54:43.621423 kubelet[2749]: I0916 04:54:43.608721 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-586c95c678-xqkfq" podStartSLOduration=27.28833495 podStartE2EDuration="37.602716205s" podCreationTimestamp="2025-09-16 04:54:06 +0000 UTC" firstStartedPulling="2025-09-16 04:54:32.849942568 +0000 UTC m=+42.650993773" lastFinishedPulling="2025-09-16 04:54:43.164323811 +0000 UTC m=+52.965375028" observedRunningTime="2025-09-16 04:54:43.595794955 +0000 UTC m=+53.396846161" watchObservedRunningTime="2025-09-16 04:54:43.602716205 +0000 UTC m=+53.403767411"
Sep 16 04:54:43.624342 kubelet[2749]: I0916 04:54:43.623897 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-725zd" podStartSLOduration=23.881904925 podStartE2EDuration="34.62388147s" podCreationTimestamp="2025-09-16 04:54:09 +0000 UTC" firstStartedPulling="2025-09-16 04:54:31.892710002 +0000 UTC m=+41.693761209" lastFinishedPulling="2025-09-16 04:54:42.634686547 +0000 UTC m=+52.435737754" observedRunningTime="2025-09-16 04:54:43.622618821 +0000 UTC m=+53.423670026" watchObservedRunningTime="2025-09-16 04:54:43.62388147 +0000 UTC m=+53.424932666"
Sep 16 04:54:43.852400 containerd[1569]: time="2025-09-16T04:54:43.852335213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"0b56e9be847aa4a4ea4e82a02e1adb88658385ad98d9b16bae4d2dcdbe1aaea7\" pid:5003 exit_status:1 exited_at:{seconds:1757998483 nanos:815527187}"
Sep 16 04:54:44.593614 kubelet[2749]: I0916 04:54:44.593559 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:54:44.850041 containerd[1569]: time="2025-09-16T04:54:44.849948597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"42755171ad945d96befe99bbef9c04a4fbe31b8df35cf09f012ccf5f2514a675\" pid:5033 exit_status:1 exited_at:{seconds:1757998484 nanos:849472995}"
Sep 16 04:54:45.681150 containerd[1569]: time="2025-09-16T04:54:45.681051603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"dcb7a61ffc13d7bb7da334048088bbdb59d6419c798bc0ddae054ee59105243a\" pid:5057 exit_status:1 exited_at:{seconds:1757998485 nanos:680561463}"
Sep 16 04:54:46.422179 containerd[1569]: time="2025-09-16T04:54:46.422120375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:46.423055 containerd[1569]: time="2025-09-16T04:54:46.423019222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 16 04:54:46.423998 containerd[1569]: time="2025-09-16T04:54:46.423958375Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:46.437982 containerd[1569]: time="2025-09-16T04:54:46.437933680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:46.438681 containerd[1569]: time="2025-09-16T04:54:46.438301521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.273452854s"
Sep 16 04:54:46.438681 containerd[1569]: time="2025-09-16T04:54:46.438328671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 16 04:54:46.447528 containerd[1569]: time="2025-09-16T04:54:46.447073903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 16 04:54:46.493548 containerd[1569]: time="2025-09-16T04:54:46.490681164Z" level=info msg="CreateContainer within sandbox \"cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 16 04:54:46.503533 containerd[1569]: time="2025-09-16T04:54:46.500873250Z" level=info msg="Container db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:46.519467 containerd[1569]: time="2025-09-16T04:54:46.519426648Z" level=info msg="CreateContainer within sandbox \"cd4d5f26568fcdbbabd7af7c3161f51d0aac6147e989b3748cccd59a5a1c78fc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\""
Sep 16 04:54:46.521544 containerd[1569]: time="2025-09-16T04:54:46.521044654Z" level=info msg="StartContainer for \"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\""
Sep 16 04:54:46.524620 containerd[1569]: time="2025-09-16T04:54:46.524561734Z" level=info msg="connecting to shim db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022" address="unix:///run/containerd/s/c0868d0f3436344bc516a7e627464d37d2a857a1ec03486bd45f4ea208a0fbfc" protocol=ttrpc version=3
Sep 16 04:54:46.574723 systemd[1]: Started cri-containerd-db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022.scope - libcontainer container db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022.
Sep 16 04:54:46.639601 containerd[1569]: time="2025-09-16T04:54:46.639555799Z" level=info msg="StartContainer for \"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" returns successfully"
Sep 16 04:54:47.626100 kubelet[2749]: I0916 04:54:47.625615 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79f564b968-87w2g" podStartSLOduration=25.103396616 podStartE2EDuration="38.625597773s" podCreationTimestamp="2025-09-16 04:54:09 +0000 UTC" firstStartedPulling="2025-09-16 04:54:32.916819662 +0000 UTC m=+42.717870869" lastFinishedPulling="2025-09-16 04:54:46.43902082 +0000 UTC m=+56.240072026" observedRunningTime="2025-09-16 04:54:47.620950201 +0000 UTC m=+57.422001408" watchObservedRunningTime="2025-09-16 04:54:47.625597773 +0000 UTC m=+57.426648979"
Sep 16 04:54:47.676440 containerd[1569]: time="2025-09-16T04:54:47.676409019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" id:\"12c40603d5466abbe0fed32373cd9e025354acffae06a5610627b8308e68b1d9\" pid:5126 exited_at:{seconds:1757998487 nanos:670966006}"
Sep 16 04:54:48.036605 containerd[1569]: time="2025-09-16T04:54:48.036477628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:48.037444 containerd[1569]: time="2025-09-16T04:54:48.037407012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 16 04:54:48.038191 containerd[1569]: time="2025-09-16T04:54:48.038154805Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:48.040304 containerd[1569]: time="2025-09-16T04:54:48.039902686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:48.040304 containerd[1569]: time="2025-09-16T04:54:48.040204491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.593098589s"
Sep 16 04:54:48.040304 containerd[1569]: time="2025-09-16T04:54:48.040226954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 16 04:54:48.045946 containerd[1569]: time="2025-09-16T04:54:48.045924544Z" level=info msg="CreateContainer within sandbox \"48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 16 04:54:48.066540 containerd[1569]: time="2025-09-16T04:54:48.064884811Z" level=info msg="Container c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:48.092272 containerd[1569]: time="2025-09-16T04:54:48.092213240Z" level=info msg="CreateContainer within sandbox \"48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b\""
Sep 16 04:54:48.092896 containerd[1569]: time="2025-09-16T04:54:48.092846739Z" level=info msg="StartContainer for \"c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b\""
Sep 16 04:54:48.094639 containerd[1569]: time="2025-09-16T04:54:48.094475545Z" level=info msg="connecting to shim c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b" address="unix:///run/containerd/s/5a0f617d083bf8f6cc4ff4969d076b16b275697623fead6e8ec86fdf98f6184d" protocol=ttrpc version=3
Sep 16 04:54:48.137806 systemd[1]: Started cri-containerd-c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b.scope - libcontainer container c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b.
Sep 16 04:54:48.207073 containerd[1569]: time="2025-09-16T04:54:48.207034400Z" level=info msg="StartContainer for \"c24958a0501fdd497f3434949df01e8cd9860ec6fa0166b042a6be30f2c35c7b\" returns successfully"
Sep 16 04:54:48.209528 containerd[1569]: time="2025-09-16T04:54:48.209200314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 16 04:54:49.835231 containerd[1569]: time="2025-09-16T04:54:49.835170842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:49.836285 containerd[1569]: time="2025-09-16T04:54:49.836119752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 16 04:54:49.837215 containerd[1569]: time="2025-09-16T04:54:49.837189970Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:49.838927 containerd[1569]: time="2025-09-16T04:54:49.838905529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:49.839398 containerd[1569]: time="2025-09-16T04:54:49.839370121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.630145782s"
Sep 16 04:54:49.839933 containerd[1569]: time="2025-09-16T04:54:49.839401430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 16 04:54:49.842645 containerd[1569]: time="2025-09-16T04:54:49.842617714Z" level=info msg="CreateContainer within sandbox \"48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 16 04:54:49.873450 containerd[1569]: time="2025-09-16T04:54:49.872532214Z" level=info msg="Container 10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:49.895818 containerd[1569]: time="2025-09-16T04:54:49.895781686Z" level=info msg="CreateContainer within sandbox \"48ceeef0ecdf13074d5514c20f6d7d67cd11526bfe4bdd2c2785442136da918a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087\""
Sep 16 04:54:49.897703 containerd[1569]: time="2025-09-16T04:54:49.897670471Z" level=info msg="StartContainer for \"10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087\""
Sep 16 04:54:49.899254 containerd[1569]: time="2025-09-16T04:54:49.899185072Z" level=info msg="connecting to shim 10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087" address="unix:///run/containerd/s/5a0f617d083bf8f6cc4ff4969d076b16b275697623fead6e8ec86fdf98f6184d" protocol=ttrpc version=3
Sep 16 04:54:49.938827 systemd[1]: Started cri-containerd-10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087.scope - libcontainer container 10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087.
Sep 16 04:54:49.983803 containerd[1569]: time="2025-09-16T04:54:49.983769359Z" level=info msg="StartContainer for \"10161b9b0a8c1b6cd7d7e4638249d892855da703f213cb3c07d2e0d5a440d087\" returns successfully"
Sep 16 04:54:50.632118 kubelet[2749]: I0916 04:54:50.632011 2749 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 16 04:54:50.632118 kubelet[2749]: I0916 04:54:50.632071 2749 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 16 04:54:50.684520 kubelet[2749]: I0916 04:54:50.683838 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9nfwp" podStartSLOduration=26.407248034 podStartE2EDuration="41.683810356s" podCreationTimestamp="2025-09-16 04:54:09 +0000 UTC" firstStartedPulling="2025-09-16 04:54:34.563883646 +0000 UTC m=+44.364934853" lastFinishedPulling="2025-09-16 04:54:49.84044597 +0000 UTC m=+59.641497175" observedRunningTime="2025-09-16 04:54:50.681847562 +0000 UTC m=+60.482898768" watchObservedRunningTime="2025-09-16 04:54:50.683810356 +0000 UTC m=+60.484861562"
Sep 16 04:54:58.626990 containerd[1569]: time="2025-09-16T04:54:58.626937225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"cddc662a8aff0574cf1be91ac4fd581424a8b2409290b7187f5f82772eae7154\" pid:5233 exited_at:{seconds:1757998498 nanos:626588361}"
Sep 16 04:55:09.728718 containerd[1569]: time="2025-09-16T04:55:09.728668422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" id:\"3b3cc142219314e66499c607696f5a990d2f1a5f035432a897b0503e46371ed5\" pid:5265 exited_at:{seconds:1757998509 nanos:728217915}"
Sep 16 04:55:12.525232 kubelet[2749]: I0916 04:55:12.525156 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:55:15.827518 containerd[1569]: time="2025-09-16T04:55:15.827373666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"ce4ac748b75fa7586565737839826dc63a518d056a33d05c35b563b2b17e28e5\" pid:5289 exited_at:{seconds:1757998515 nanos:825407867}"
Sep 16 04:55:17.696989 containerd[1569]: time="2025-09-16T04:55:17.696928880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" id:\"d522352c220b7a533a96f9e73ba69ae45bc189fd46ac75ff563d38147ef7b593\" pid:5314 exited_at:{seconds:1757998517 nanos:678055484}"
Sep 16 04:55:27.319834 systemd[1]: Started sshd@7-37.27.208.182:22-139.178.89.65:38396.service - OpenSSH per-connection server daemon (139.178.89.65:38396).
Sep 16 04:55:28.501655 sshd[5328]: Accepted publickey for core from 139.178.89.65 port 38396 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:55:28.505419 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:28.515297 systemd-logind[1535]: New session 8 of user core.
Sep 16 04:55:28.518619 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 16 04:55:28.572219 containerd[1569]: time="2025-09-16T04:55:28.572142985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"a6be9769d181f1688ba76547574f43576036ec1ac340b7aa7a76dd650b3e94a4\" pid:5346 exited_at:{seconds:1757998528 nanos:571326086}"
Sep 16 04:55:29.808192 sshd[5358]: Connection closed by 139.178.89.65 port 38396
Sep 16 04:55:29.809390 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:29.820683 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit.
Sep 16 04:55:29.820990 systemd[1]: sshd@7-37.27.208.182:22-139.178.89.65:38396.service: Deactivated successfully.
Sep 16 04:55:29.823443 systemd[1]: session-8.scope: Deactivated successfully.
Sep 16 04:55:29.830944 systemd-logind[1535]: Removed session 8.
Sep 16 04:55:34.965630 systemd[1]: Started sshd@8-37.27.208.182:22-139.178.89.65:51882.service - OpenSSH per-connection server daemon (139.178.89.65:51882).
Sep 16 04:55:35.997632 sshd[5373]: Accepted publickey for core from 139.178.89.65 port 51882 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:55:36.001253 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:36.009923 systemd-logind[1535]: New session 9 of user core.
Sep 16 04:55:36.015654 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 16 04:55:36.917520 sshd[5376]: Connection closed by 139.178.89.65 port 51882
Sep 16 04:55:36.918697 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:36.925726 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit.
Sep 16 04:55:36.926828 systemd[1]: sshd@8-37.27.208.182:22-139.178.89.65:51882.service: Deactivated successfully.
Sep 16 04:55:36.931416 systemd[1]: session-9.scope: Deactivated successfully.
Sep 16 04:55:36.934977 systemd-logind[1535]: Removed session 9.
Sep 16 04:55:37.083986 systemd[1]: Started sshd@9-37.27.208.182:22-139.178.89.65:51884.service - OpenSSH per-connection server daemon (139.178.89.65:51884).
Sep 16 04:55:38.068678 sshd[5389]: Accepted publickey for core from 139.178.89.65 port 51884 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:55:38.070533 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:38.077160 systemd-logind[1535]: New session 10 of user core.
Sep 16 04:55:38.084957 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 16 04:55:38.881919 sshd[5397]: Connection closed by 139.178.89.65 port 51884
Sep 16 04:55:38.883822 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:38.887517 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit.
Sep 16 04:55:38.888904 systemd[1]: sshd@9-37.27.208.182:22-139.178.89.65:51884.service: Deactivated successfully.
Sep 16 04:55:38.891179 systemd[1]: session-10.scope: Deactivated successfully.
Sep 16 04:55:38.893125 systemd-logind[1535]: Removed session 10.
Sep 16 04:55:39.051681 systemd[1]: Started sshd@10-37.27.208.182:22-139.178.89.65:51894.service - OpenSSH per-connection server daemon (139.178.89.65:51894).
Sep 16 04:55:39.524523 containerd[1569]: time="2025-09-16T04:55:39.524393328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"7e93e401d9afb3c6fb0d8e36ebd7e16dc2d956694aa18f6463b3e723d33a6dc7\" pid:5422 exited_at:{seconds:1757998539 nanos:492710804}"
Sep 16 04:55:40.045109 sshd[5407]: Accepted publickey for core from 139.178.89.65 port 51894 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:55:40.048091 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:40.056475 systemd-logind[1535]: New session 11 of user core.
Sep 16 04:55:40.060616 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 16 04:55:40.797225 sshd[5432]: Connection closed by 139.178.89.65 port 51894
Sep 16 04:55:40.798384 sshd-session[5407]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:40.804682 systemd[1]: sshd@10-37.27.208.182:22-139.178.89.65:51894.service: Deactivated successfully.
Sep 16 04:55:40.807159 systemd[1]: session-11.scope: Deactivated successfully.
Sep 16 04:55:40.808920 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit.
Sep 16 04:55:40.811192 systemd-logind[1535]: Removed session 11.
Sep 16 04:55:45.697115 containerd[1569]: time="2025-09-16T04:55:45.697058533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"7a2aff2819b25b650211e5d5ade8393011712fc5243c85f094956fc270a06450\" pid:5460 exited_at:{seconds:1757998545 nanos:696645746}"
Sep 16 04:55:46.005630 systemd[1]: Started sshd@11-37.27.208.182:22-139.178.89.65:48722.service - OpenSSH per-connection server daemon (139.178.89.65:48722).
Sep 16 04:55:47.137881 sshd[5472]: Accepted publickey for core from 139.178.89.65 port 48722 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:55:47.141843 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:47.152554 systemd-logind[1535]: New session 12 of user core.
Sep 16 04:55:47.164748 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 16 04:55:47.675713 containerd[1569]: time="2025-09-16T04:55:47.675627527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" id:\"55d8f2649e46e188b4a72ffea0b162dcf3d80ecceced03b82e2443b0f6cad015\" pid:5487 exited_at:{seconds:1757998547 nanos:675305223}"
Sep 16 04:55:47.991690 sshd[5475]: Connection closed by 139.178.89.65 port 48722
Sep 16 04:55:47.996555 sshd-session[5472]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:48.001596 systemd[1]: sshd@11-37.27.208.182:22-139.178.89.65:48722.service: Deactivated successfully.
Sep 16 04:55:48.004192 systemd[1]: session-12.scope: Deactivated successfully.
Sep 16 04:55:48.005368 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit.
Sep 16 04:55:48.006882 systemd-logind[1535]: Removed session 12.
Sep 16 04:55:53.154804 systemd[1]: Started sshd@12-37.27.208.182:22-139.178.89.65:60648.service - OpenSSH per-connection server daemon (139.178.89.65:60648).
Sep 16 04:55:54.182112 sshd[5515]: Accepted publickey for core from 139.178.89.65 port 60648 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:55:54.184923 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:54.190821 systemd-logind[1535]: New session 13 of user core.
Sep 16 04:55:54.197638 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 16 04:55:54.958618 sshd[5518]: Connection closed by 139.178.89.65 port 60648
Sep 16 04:55:54.959371 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:54.963566 systemd[1]: sshd@12-37.27.208.182:22-139.178.89.65:60648.service: Deactivated successfully.
Sep 16 04:55:54.965762 systemd[1]: session-13.scope: Deactivated successfully.
Sep 16 04:55:54.968162 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit.
Sep 16 04:55:54.969665 systemd-logind[1535]: Removed session 13.
Sep 16 04:55:58.635074 containerd[1569]: time="2025-09-16T04:55:58.634995131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"ab714a52a8186456ad960e12fa62a8f97060102e3b9c66e1fbfb7ef03a38855f\" pid:5546 exited_at:{seconds:1757998558 nanos:634645357}"
Sep 16 04:56:00.164690 systemd[1]: Started sshd@13-37.27.208.182:22-139.178.89.65:48070.service - OpenSSH per-connection server daemon (139.178.89.65:48070).
Sep 16 04:56:01.326059 sshd[5558]: Accepted publickey for core from 139.178.89.65 port 48070 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:01.330160 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:01.338603 systemd-logind[1535]: New session 14 of user core.
Sep 16 04:56:01.344662 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 16 04:56:02.303716 sshd[5561]: Connection closed by 139.178.89.65 port 48070
Sep 16 04:56:02.305690 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:02.314197 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit.
Sep 16 04:56:02.314892 systemd[1]: sshd@13-37.27.208.182:22-139.178.89.65:48070.service: Deactivated successfully.
Sep 16 04:56:02.319241 systemd[1]: session-14.scope: Deactivated successfully.
Sep 16 04:56:02.324246 systemd-logind[1535]: Removed session 14.
Sep 16 04:56:02.457767 systemd[1]: Started sshd@14-37.27.208.182:22-139.178.89.65:48074.service - OpenSSH per-connection server daemon (139.178.89.65:48074).
Sep 16 04:56:03.467122 sshd[5573]: Accepted publickey for core from 139.178.89.65 port 48074 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:03.473108 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:03.479304 systemd-logind[1535]: New session 15 of user core.
Sep 16 04:56:03.485732 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 16 04:56:04.441061 sshd[5583]: Connection closed by 139.178.89.65 port 48074
Sep 16 04:56:04.441358 sshd-session[5573]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:04.447703 systemd[1]: sshd@14-37.27.208.182:22-139.178.89.65:48074.service: Deactivated successfully.
Sep 16 04:56:04.449806 systemd[1]: session-15.scope: Deactivated successfully.
Sep 16 04:56:04.452185 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit.
Sep 16 04:56:04.454556 systemd-logind[1535]: Removed session 15.
Sep 16 04:56:04.645041 systemd[1]: Started sshd@15-37.27.208.182:22-139.178.89.65:48080.service - OpenSSH per-connection server daemon (139.178.89.65:48080).
Sep 16 04:56:05.768024 sshd[5594]: Accepted publickey for core from 139.178.89.65 port 48080 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:05.769425 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:05.775540 systemd-logind[1535]: New session 16 of user core.
Sep 16 04:56:05.781677 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 16 04:56:07.313592 sshd[5597]: Connection closed by 139.178.89.65 port 48080
Sep 16 04:56:07.315677 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:07.320355 systemd[1]: sshd@15-37.27.208.182:22-139.178.89.65:48080.service: Deactivated successfully.
Sep 16 04:56:07.323088 systemd[1]: session-16.scope: Deactivated successfully.
Sep 16 04:56:07.324666 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit.
Sep 16 04:56:07.326613 systemd-logind[1535]: Removed session 16.
Sep 16 04:56:07.466880 systemd[1]: Started sshd@16-37.27.208.182:22-139.178.89.65:48090.service - OpenSSH per-connection server daemon (139.178.89.65:48090).
Sep 16 04:56:08.465149 sshd[5631]: Accepted publickey for core from 139.178.89.65 port 48090 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:08.466915 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:08.475542 systemd-logind[1535]: New session 17 of user core.
Sep 16 04:56:08.487834 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 16 04:56:09.742196 sshd[5634]: Connection closed by 139.178.89.65 port 48090
Sep 16 04:56:09.752433 sshd-session[5631]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:09.771733 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit.
Sep 16 04:56:09.771902 systemd[1]: sshd@16-37.27.208.182:22-139.178.89.65:48090.service: Deactivated successfully.
Sep 16 04:56:09.778802 systemd[1]: session-17.scope: Deactivated successfully.
Sep 16 04:56:09.780753 systemd-logind[1535]: Removed session 17.
Sep 16 04:56:09.867980 containerd[1569]: time="2025-09-16T04:56:09.860203181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" id:\"f64daa45993d551f3cfb0c467ff57026e5657b03260e83411df625e46a19ce39\" pid:5655 exited_at:{seconds:1757998569 nanos:859874759}"
Sep 16 04:56:09.907938 systemd[1]: Started sshd@17-37.27.208.182:22-139.178.89.65:48102.service - OpenSSH per-connection server daemon (139.178.89.65:48102).
Sep 16 04:56:10.914042 sshd[5666]: Accepted publickey for core from 139.178.89.65 port 48102 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:10.919256 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:10.928609 systemd-logind[1535]: New session 18 of user core.
Sep 16 04:56:10.932698 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 16 04:56:11.790384 sshd[5671]: Connection closed by 139.178.89.65 port 48102
Sep 16 04:56:11.796812 sshd-session[5666]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:11.806436 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit.
Sep 16 04:56:11.810461 systemd[1]: sshd@17-37.27.208.182:22-139.178.89.65:48102.service: Deactivated successfully.
Sep 16 04:56:11.813860 systemd[1]: session-18.scope: Deactivated successfully.
Sep 16 04:56:11.821434 systemd-logind[1535]: Removed session 18.
Sep 16 04:56:15.780552 containerd[1569]: time="2025-09-16T04:56:15.780319520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d821d5b5e4a02c17ba4c8f85380fbfcd9a132a33d23cb533318bfe154eb326d\" id:\"440685a76b58e729ac41220e213cb6110b46d5bafb224e726f242806e83bcf6a\" pid:5694 exited_at:{seconds:1757998575 nanos:778963616}"
Sep 16 04:56:16.967908 systemd[1]: Started sshd@18-37.27.208.182:22-139.178.89.65:60512.service - OpenSSH per-connection server daemon (139.178.89.65:60512).
Sep 16 04:56:17.665757 containerd[1569]: time="2025-09-16T04:56:17.665692533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db36bbfea3db97ad34a532049e34063a733d2d2db1f876dc2c2f47f92b82c022\" id:\"9c75f61684cfc6f7fde74e73bcd3b60cef3f559f75f93c9e73cb5a90b975ac2c\" pid:5721 exited_at:{seconds:1757998577 nanos:665037604}"
Sep 16 04:56:17.987314 sshd[5705]: Accepted publickey for core from 139.178.89.65 port 60512 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:17.988972 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:17.993747 systemd-logind[1535]: New session 19 of user core.
Sep 16 04:56:17.999688 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 16 04:56:18.832057 sshd[5730]: Connection closed by 139.178.89.65 port 60512
Sep 16 04:56:18.832551 sshd-session[5705]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:18.836453 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit.
Sep 16 04:56:18.837451 systemd[1]: sshd@18-37.27.208.182:22-139.178.89.65:60512.service: Deactivated successfully.
Sep 16 04:56:18.841219 systemd[1]: session-19.scope: Deactivated successfully.
Sep 16 04:56:18.842598 systemd-logind[1535]: Removed session 19.
Sep 16 04:56:24.002038 systemd[1]: Started sshd@19-37.27.208.182:22-139.178.89.65:57640.service - OpenSSH per-connection server daemon (139.178.89.65:57640).
Sep 16 04:56:24.998703 sshd[5742]: Accepted publickey for core from 139.178.89.65 port 57640 ssh2: RSA SHA256:ukQ34xonoknF08dP0xLAU5hfihSV0h8HVu+YH+vjyGk
Sep 16 04:56:25.000722 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:25.006425 systemd-logind[1535]: New session 20 of user core.
Sep 16 04:56:25.010722 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 16 04:56:25.773285 sshd[5745]: Connection closed by 139.178.89.65 port 57640
Sep 16 04:56:25.774131 sshd-session[5742]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:25.779033 systemd[1]: sshd@19-37.27.208.182:22-139.178.89.65:57640.service: Deactivated successfully.
Sep 16 04:56:25.781320 systemd[1]: session-20.scope: Deactivated successfully.
Sep 16 04:56:25.782788 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit.
Sep 16 04:56:25.784912 systemd-logind[1535]: Removed session 20.
Sep 16 04:56:28.610299 containerd[1569]: time="2025-09-16T04:56:28.610225525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f4769d3a970d247123dd173b81d6cfa362dd19c8c3ad19a5a7c33fe4733e05\" id:\"c5f348058b33560da93f4de2ed7330977a3bb42bd18a65538beb6a5ceb994043\" pid:5769 exited_at:{seconds:1757998588 nanos:609238290}"