Jun 21 05:04:11.899623 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025
Jun 21 05:04:11.899675 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 05:04:11.899692 kernel: BIOS-provided physical RAM map:
Jun 21 05:04:11.899700 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Jun 21 05:04:11.899709 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jun 21 05:04:11.899718 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Jun 21 05:04:11.899729 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jun 21 05:04:11.899738 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Jun 21 05:04:11.899746 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jun 21 05:04:11.899761 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jun 21 05:04:11.899769 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jun 21 05:04:11.899800 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jun 21 05:04:11.899809 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jun 21 05:04:11.899818 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jun 21 05:04:11.899828 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jun 21 05:04:11.899838 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jun 21 05:04:11.899851 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jun 21 05:04:11.899860 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 21 05:04:11.899870 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 21 05:04:11.899879 kernel: NX (Execute Disable) protection: active
Jun 21 05:04:11.899888 kernel: APIC: Static calls initialized
Jun 21 05:04:11.899898 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Jun 21 05:04:11.899913 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Jun 21 05:04:11.899930 kernel: extended physical RAM map:
Jun 21 05:04:11.899943 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Jun 21 05:04:11.899953 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jun 21 05:04:11.899963 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Jun 21 05:04:11.899983 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jun 21 05:04:11.899993 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Jun 21 05:04:11.900003 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Jun 21 05:04:11.900012 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Jun 21 05:04:11.900033 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Jun 21 05:04:11.900048 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Jun 21 05:04:11.900062 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jun 21 05:04:11.900072 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jun 21 05:04:11.900088 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jun 21 05:04:11.900102 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jun 21 05:04:11.900111 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jun 21 05:04:11.900124 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jun 21 05:04:11.900134 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jun 21 05:04:11.900149 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jun 21 05:04:11.900167 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jun 21 05:04:11.900177 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 21 05:04:11.900186 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 21 05:04:11.900199 kernel: efi: EFI v2.7 by EDK II
Jun 21 05:04:11.900209 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Jun 21 05:04:11.900219 kernel: random: crng init done
Jun 21 05:04:11.900240 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jun 21 05:04:11.900250 kernel: secureboot: Secure boot enabled
Jun 21 05:04:11.900260 kernel: SMBIOS 2.8 present.
Jun 21 05:04:11.900270 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jun 21 05:04:11.900280 kernel: DMI: Memory slots populated: 1/1
Jun 21 05:04:11.900290 kernel: Hypervisor detected: KVM
Jun 21 05:04:11.900300 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 21 05:04:11.900310 kernel: kvm-clock: using sched offset of 5360756406 cycles
Jun 21 05:04:11.900324 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 21 05:04:11.900335 kernel: tsc: Detected 2794.746 MHz processor
Jun 21 05:04:11.900344 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 21 05:04:11.900353 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 21 05:04:11.900362 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Jun 21 05:04:11.900372 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 21 05:04:11.900382 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 21 05:04:11.900392 kernel: Using GB pages for direct mapping
Jun 21 05:04:11.900402 kernel: ACPI: Early table checksum verification disabled
Jun 21 05:04:11.900421 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Jun 21 05:04:11.900431 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jun 21 05:04:11.900442 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:04:11.900452 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:04:11.900473 kernel: ACPI: FACS 0x000000009BBDD000 000040
Jun 21 05:04:11.900483 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:04:11.900494 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:04:11.900504 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:04:11.900515 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 05:04:11.900529 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jun 21 05:04:11.900539 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Jun 21 05:04:11.900549 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Jun 21 05:04:11.900559 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Jun 21 05:04:11.900569 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Jun 21 05:04:11.900579 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Jun 21 05:04:11.900589 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Jun 21 05:04:11.900599 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Jun 21 05:04:11.900609 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Jun 21 05:04:11.900622 kernel: No NUMA configuration found
Jun 21 05:04:11.900633 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Jun 21 05:04:11.900643 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Jun 21 05:04:11.900653 kernel: Zone ranges:
Jun 21 05:04:11.900664 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 21 05:04:11.900674 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Jun 21 05:04:11.900684 kernel: Normal empty
Jun 21 05:04:11.900693 kernel: Device empty
Jun 21 05:04:11.900702 kernel: Movable zone start for each node
Jun 21 05:04:11.900715 kernel: Early memory node ranges
Jun 21 05:04:11.900725 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Jun 21 05:04:11.900735 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Jun 21 05:04:11.900746 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Jun 21 05:04:11.900755 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Jun 21 05:04:11.900765 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Jun 21 05:04:11.900793 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Jun 21 05:04:11.900803 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 21 05:04:11.900813 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Jun 21 05:04:11.900823 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 21 05:04:11.900837 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jun 21 05:04:11.900847 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jun 21 05:04:11.900857 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Jun 21 05:04:11.900877 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 21 05:04:11.900887 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 21 05:04:11.900897 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 21 05:04:11.900912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 21 05:04:11.900924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 21 05:04:11.900935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 21 05:04:11.900948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 21 05:04:11.900958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 21 05:04:11.900968 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 21 05:04:11.900982 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 21 05:04:11.900997 kernel: TSC deadline timer available
Jun 21 05:04:11.901008 kernel: CPU topo: Max. logical packages: 1
Jun 21 05:04:11.901018 kernel: CPU topo: Max. logical dies: 1
Jun 21 05:04:11.901028 kernel: CPU topo: Max. dies per package: 1
Jun 21 05:04:11.901051 kernel: CPU topo: Max. threads per core: 1
Jun 21 05:04:11.901063 kernel: CPU topo: Num. cores per package: 4
Jun 21 05:04:11.901073 kernel: CPU topo: Num. threads per package: 4
Jun 21 05:04:11.901084 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jun 21 05:04:11.901097 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 21 05:04:11.901108 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 21 05:04:11.901118 kernel: kvm-guest: setup PV sched yield
Jun 21 05:04:11.901129 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jun 21 05:04:11.901140 kernel: Booting paravirtualized kernel on KVM
Jun 21 05:04:11.901155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 21 05:04:11.901165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 21 05:04:11.901176 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jun 21 05:04:11.901186 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jun 21 05:04:11.901197 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 21 05:04:11.901207 kernel: kvm-guest: PV spinlocks enabled
Jun 21 05:04:11.901223 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 21 05:04:11.901234 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 05:04:11.901247 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 21 05:04:11.901257 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 21 05:04:11.901266 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 21 05:04:11.901280 kernel: Fallback order for Node 0: 0
Jun 21 05:04:11.901310 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Jun 21 05:04:11.901342 kernel: Policy zone: DMA32
Jun 21 05:04:11.901364 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 21 05:04:11.901375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 21 05:04:11.901387 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 21 05:04:11.901445 kernel: ftrace: allocated 157 pages with 5 groups
Jun 21 05:04:11.901469 kernel: Dynamic Preempt: voluntary
Jun 21 05:04:11.901480 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 21 05:04:11.901508 kernel: rcu: RCU event tracing is enabled.
Jun 21 05:04:11.901539 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 21 05:04:11.901575 kernel: Trampoline variant of Tasks RCU enabled.
Jun 21 05:04:11.901589 kernel: Rude variant of Tasks RCU enabled.
Jun 21 05:04:11.901609 kernel: Tracing variant of Tasks RCU enabled.
Jun 21 05:04:11.901641 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 21 05:04:11.901674 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 21 05:04:11.901701 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 05:04:11.901725 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 05:04:11.901749 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 21 05:04:11.901801 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 21 05:04:11.901828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 21 05:04:11.901839 kernel: Console: colour dummy device 80x25
Jun 21 05:04:11.901859 kernel: printk: legacy console [ttyS0] enabled
Jun 21 05:04:11.901871 kernel: ACPI: Core revision 20240827
Jun 21 05:04:11.901886 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 21 05:04:11.901897 kernel: APIC: Switch to symmetric I/O mode setup
Jun 21 05:04:11.901913 kernel: x2apic enabled
Jun 21 05:04:11.901929 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 21 05:04:11.901961 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 21 05:04:11.901996 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 21 05:04:11.902010 kernel: kvm-guest: setup PV IPIs
Jun 21 05:04:11.902022 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 21 05:04:11.902034 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jun 21 05:04:11.902048 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jun 21 05:04:11.902059 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 21 05:04:11.902069 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 21 05:04:11.902079 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 21 05:04:11.902089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 21 05:04:11.902098 kernel: Spectre V2 : Mitigation: Retpolines
Jun 21 05:04:11.902108 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 21 05:04:11.902118 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 21 05:04:11.902134 kernel: RETBleed: Mitigation: untrained return thunk
Jun 21 05:04:11.902148 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 21 05:04:11.902160 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 21 05:04:11.902172 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 21 05:04:11.902184 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 21 05:04:11.902195 kernel: x86/bugs: return thunk changed
Jun 21 05:04:11.902207 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 21 05:04:11.902219 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 21 05:04:11.902230 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 21 05:04:11.902244 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 21 05:04:11.902265 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 21 05:04:11.902282 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 21 05:04:11.902292 kernel: Freeing SMP alternatives memory: 32K
Jun 21 05:04:11.902311 kernel: pid_max: default: 32768 minimum: 301
Jun 21 05:04:11.902327 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 21 05:04:11.902337 kernel: landlock: Up and running.
Jun 21 05:04:11.902349 kernel: SELinux: Initializing.
Jun 21 05:04:11.902374 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 05:04:11.902405 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 05:04:11.902428 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 21 05:04:11.902453 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 21 05:04:11.902489 kernel: ... version: 0
Jun 21 05:04:11.902512 kernel: ... bit width: 48
Jun 21 05:04:11.902522 kernel: ... generic registers: 6
Jun 21 05:04:11.902532 kernel: ... value mask: 0000ffffffffffff
Jun 21 05:04:11.902541 kernel: ... max period: 00007fffffffffff
Jun 21 05:04:11.902550 kernel: ... fixed-purpose events: 0
Jun 21 05:04:11.902562 kernel: ... event mask: 000000000000003f
Jun 21 05:04:11.902572 kernel: signal: max sigframe size: 1776
Jun 21 05:04:11.902581 kernel: rcu: Hierarchical SRCU implementation.
Jun 21 05:04:11.902591 kernel: rcu: Max phase no-delay instances is 400.
Jun 21 05:04:11.902601 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 21 05:04:11.902610 kernel: smp: Bringing up secondary CPUs ...
Jun 21 05:04:11.902620 kernel: smpboot: x86: Booting SMP configuration:
Jun 21 05:04:11.902629 kernel: .... node #0, CPUs: #1 #2 #3
Jun 21 05:04:11.902638 kernel: smp: Brought up 1 node, 4 CPUs
Jun 21 05:04:11.902648 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jun 21 05:04:11.902661 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 137064K reserved, 0K cma-reserved)
Jun 21 05:04:11.902670 kernel: devtmpfs: initialized
Jun 21 05:04:11.902679 kernel: x86/mm: Memory block size: 128MB
Jun 21 05:04:11.902689 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Jun 21 05:04:11.902699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Jun 21 05:04:11.902709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 21 05:04:11.902718 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 21 05:04:11.902727 kernel: pinctrl core: initialized pinctrl subsystem
Jun 21 05:04:11.902744 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 21 05:04:11.902762 kernel: audit: initializing netlink subsys (disabled)
Jun 21 05:04:11.902798 kernel: audit: type=2000 audit(1750482249.274:1): state=initialized audit_enabled=0 res=1
Jun 21 05:04:11.902823 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 21 05:04:11.902833 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 21 05:04:11.902845 kernel: cpuidle: using governor menu
Jun 21 05:04:11.902865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 21 05:04:11.902875 kernel: dca service started, version 1.12.1
Jun 21 05:04:11.902887 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jun 21 05:04:11.902902 kernel: PCI: Using configuration type 1 for base access
Jun 21 05:04:11.902912 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 21 05:04:11.902923 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 21 05:04:11.902934 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 21 05:04:11.902945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 21 05:04:11.902955 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 21 05:04:11.902966 kernel: ACPI: Added _OSI(Module Device)
Jun 21 05:04:11.902977 kernel: ACPI: Added _OSI(Processor Device)
Jun 21 05:04:11.902994 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 21 05:04:11.903009 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 21 05:04:11.903019 kernel: ACPI: Interpreter enabled
Jun 21 05:04:11.903029 kernel: ACPI: PM: (supports S0 S5)
Jun 21 05:04:11.903038 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 21 05:04:11.903048 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 21 05:04:11.903058 kernel: PCI: Using E820 reservations for host bridge windows
Jun 21 05:04:11.903067 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 21 05:04:11.903077 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 21 05:04:11.903339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 21 05:04:11.903533 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 21 05:04:11.903663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 21 05:04:11.903678 kernel: PCI host bridge to bus 0000:00
Jun 21 05:04:11.903825 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 21 05:04:11.903967 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 21 05:04:11.904115 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 21 05:04:11.904257 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jun 21 05:04:11.904372 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jun 21 05:04:11.904510 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jun 21 05:04:11.904633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 21 05:04:11.904829 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jun 21 05:04:11.904986 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jun 21 05:04:11.905131 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jun 21 05:04:11.905292 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jun 21 05:04:11.905482 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jun 21 05:04:11.905613 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 21 05:04:11.905823 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 21 05:04:11.906027 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jun 21 05:04:11.906200 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jun 21 05:04:11.906334 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jun 21 05:04:11.906538 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 21 05:04:11.906699 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jun 21 05:04:11.906890 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jun 21 05:04:11.907046 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jun 21 05:04:11.907187 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 21 05:04:11.907314 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jun 21 05:04:11.907445 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jun 21 05:04:11.907584 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jun 21 05:04:11.907712 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jun 21 05:04:11.907868 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jun 21 05:04:11.908017 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 21 05:04:11.908201 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jun 21 05:04:11.908379 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jun 21 05:04:11.908526 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jun 21 05:04:11.908886 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jun 21 05:04:11.909044 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jun 21 05:04:11.909061 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 21 05:04:11.909071 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 21 05:04:11.909089 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 21 05:04:11.909099 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 21 05:04:11.909115 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 21 05:04:11.909126 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 21 05:04:11.909136 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 21 05:04:11.909146 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 21 05:04:11.909157 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 21 05:04:11.909168 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 21 05:04:11.909179 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 21 05:04:11.909190 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 21 05:04:11.909201 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 21 05:04:11.909216 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 21 05:04:11.909227 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 21 05:04:11.909238 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 21 05:04:11.909248 kernel: iommu: Default domain type: Translated
Jun 21 05:04:11.909259 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 21 05:04:11.909269 kernel: efivars: Registered efivars operations
Jun 21 05:04:11.909280 kernel: PCI: Using ACPI for IRQ routing
Jun 21 05:04:11.909290 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 21 05:04:11.909301 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Jun 21 05:04:11.909312 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Jun 21 05:04:11.909325 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Jun 21 05:04:11.909335 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Jun 21 05:04:11.909346 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Jun 21 05:04:11.909518 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 21 05:04:11.909670 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 21 05:04:11.909848 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 21 05:04:11.909864 kernel: vgaarb: loaded
Jun 21 05:04:11.909874 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 21 05:04:11.909890 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 21 05:04:11.909901 kernel: clocksource: Switched to clocksource kvm-clock
Jun 21 05:04:11.909912 kernel: VFS: Disk quotas dquot_6.6.0
Jun 21 05:04:11.909923 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 21 05:04:11.909933 kernel: pnp: PnP ACPI init
Jun 21 05:04:11.910093 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jun 21 05:04:11.910110 kernel: pnp: PnP ACPI: found 6 devices
Jun 21 05:04:11.910120 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 21 05:04:11.910135 kernel: NET: Registered PF_INET protocol family
Jun 21 05:04:11.910146 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 21 05:04:11.910157 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 21 05:04:11.910168 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 21 05:04:11.910178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 21 05:04:11.910189 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 21 05:04:11.910199 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 21 05:04:11.910209 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 05:04:11.910219 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 05:04:11.910233 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 21 05:04:11.910243 kernel: NET: Registered PF_XDP protocol family
Jun 21 05:04:11.910397 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jun 21 05:04:11.910563 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jun 21 05:04:11.910715 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 21 05:04:11.910908 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 21 05:04:11.911046 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 21 05:04:11.911193 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jun 21 05:04:11.911345 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jun 21 05:04:11.911491 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jun 21 05:04:11.911507 kernel: PCI: CLS 0 bytes, default 64
Jun 21 05:04:11.911518 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jun 21 05:04:11.911528 kernel: Initialise system trusted keyrings
Jun 21 05:04:11.911539 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 21 05:04:11.911549 kernel: Key type asymmetric registered
Jun 21 05:04:11.911566 kernel: Asymmetric key parser 'x509' registered
Jun 21 05:04:11.911577 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 21 05:04:11.911617 kernel: io scheduler mq-deadline registered
Jun 21 05:04:11.911631 kernel: io scheduler kyber registered
Jun 21 05:04:11.911642 kernel: io scheduler bfq registered
Jun 21 05:04:11.911656 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 21 05:04:11.911671 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 21 05:04:11.911693 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 21 05:04:11.911704 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jun 21 05:04:11.911719 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 21 05:04:11.911731 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 21 05:04:11.911745 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 21 05:04:11.911756 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 21 05:04:11.911767 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 21 05:04:11.911940 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 21 05:04:11.911959 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 21 05:04:11.912123 kernel: rtc_cmos 00:04: registered as rtc0
Jun 21 05:04:11.912265 kernel: rtc_cmos 00:04: setting system clock to 2025-06-21T05:04:11 UTC (1750482251)
Jun 21 05:04:11.912409 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jun 21 05:04:11.912425 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 21 05:04:11.912436 kernel: efifb: probing for efifb
Jun 21 05:04:11.912447 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jun 21 05:04:11.912469 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jun 21 05:04:11.912480 kernel: efifb: scrolling: redraw
Jun 21 05:04:11.912491 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 21 05:04:11.912503 kernel: Console: switching to colour frame buffer device 160x50
Jun 21 05:04:11.912514 kernel: fb0: EFI VGA frame buffer device
Jun 21 05:04:11.912525 kernel: pstore: Using crash dump compression: deflate
Jun 21 05:04:11.912546 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 21 05:04:11.912560 kernel: NET: Registered PF_INET6 protocol family
Jun 21 05:04:11.912571 kernel: Segment Routing with IPv6
Jun 21 05:04:11.912582 kernel: In-situ OAM (IOAM) with IPv6
Jun 21 05:04:11.912593 kernel: NET: Registered PF_PACKET protocol family
Jun 21 05:04:11.912607 kernel: Key type dns_resolver registered
Jun 21 05:04:11.912618 kernel: IPI shorthand broadcast: enabled
Jun 21 05:04:11.912629 kernel: sched_clock: Marking stable (3431003661, 142172462)->(3588781731, -15605608)
Jun 21 05:04:11.912641 kernel: registered taskstats version 1
Jun 21 05:04:11.912652 kernel: Loading compiled-in X.509 certificates
Jun 21 05:04:11.912663 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7'
Jun 21 05:04:11.912674 kernel: Demotion targets for Node 0: null
Jun 21 05:04:11.912685 kernel: Key type .fscrypt registered
Jun 21 05:04:11.912697 kernel: Key type fscrypt-provisioning registered
Jun 21 05:04:11.912712 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 21 05:04:11.912722 kernel: ima: Allocated hash algorithm: sha1
Jun 21 05:04:11.912738 kernel: ima: No architecture policies found
Jun 21 05:04:11.912748 kernel: clk: Disabling unused clocks
Jun 21 05:04:11.912760 kernel: Warning: unable to open an initial console.
Jun 21 05:04:11.912771 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 21 05:04:11.912804 kernel: Write protecting the kernel read-only data: 24576k
Jun 21 05:04:11.912816 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 21 05:04:11.912826 kernel: Run /init as init process
Jun 21 05:04:11.912842 kernel: with arguments:
Jun 21 05:04:11.912853 kernel: /init
Jun 21 05:04:11.912864 kernel: with environment:
Jun 21 05:04:11.912875 kernel: HOME=/
Jun 21 05:04:11.912886 kernel: TERM=linux
Jun 21 05:04:11.912897 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 21 05:04:11.912909 systemd[1]: Successfully made /usr/ read-only.
Jun 21 05:04:11.912924 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 05:04:11.912942 systemd[1]: Detected virtualization kvm.
Jun 21 05:04:11.912953 systemd[1]: Detected architecture x86-64.
Jun 21 05:04:11.912965 systemd[1]: Running in initrd.
Jun 21 05:04:11.912977 systemd[1]: No hostname configured, using default hostname.
Jun 21 05:04:11.912989 systemd[1]: Hostname set to .
Jun 21 05:04:11.913001 systemd[1]: Initializing machine ID from VM UUID.
Jun 21 05:04:11.913012 systemd[1]: Queued start job for default target initrd.target.
Jun 21 05:04:11.913029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 05:04:11.913041 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 05:04:11.913053 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 21 05:04:11.913066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 05:04:11.913078 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 05:04:11.913090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 05:04:11.913104 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 05:04:11.913120 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 05:04:11.913132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 05:04:11.913144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 05:04:11.913155 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:04:11.913167 systemd[1]: Reached target slices.target - Slice Units. Jun 21 05:04:11.913179 systemd[1]: Reached target swap.target - Swaps. Jun 21 05:04:11.913191 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:04:11.913203 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 05:04:11.913215 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 05:04:11.913230 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 05:04:11.913242 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 05:04:11.913254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 05:04:11.913266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 05:04:11.913277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 05:04:11.913289 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 21 05:04:11.913301 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 05:04:11.913313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 05:04:11.913329 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 05:04:11.913341 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 05:04:11.913353 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 05:04:11.913365 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 05:04:11.913378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 05:04:11.913390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:04:11.913401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 05:04:11.913418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 05:04:11.913429 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 05:04:11.913441 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 05:04:11.913508 systemd-journald[219]: Collecting audit messages is disabled. Jun 21 05:04:11.913541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:04:11.913553 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 05:04:11.913565 systemd-journald[219]: Journal started Jun 21 05:04:11.913590 systemd-journald[219]: Runtime Journal (/run/log/journal/9232ff01ae38467abfb3b63e211435c5) is 6M, max 48.2M, 42.2M free. Jun 21 05:04:11.896281 systemd-modules-load[221]: Inserted module 'overlay' Jun 21 05:04:11.916258 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 21 05:04:11.929031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 05:04:11.934244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 05:04:11.934642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:04:11.936528 kernel: Bridge firewalling registered Jun 21 05:04:11.938006 systemd-modules-load[221]: Inserted module 'br_netfilter' Jun 21 05:04:11.938903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:04:11.939665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 05:04:11.945114 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 05:04:11.948189 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 05:04:11.952539 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 05:04:11.956838 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 05:04:11.959595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:04:11.974034 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:04:11.975049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:04:11.979318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 21 05:04:12.093580 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 05:04:12.121667 systemd-resolved[263]: Positive Trust Anchors: Jun 21 05:04:12.121689 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:04:12.121726 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:04:12.125084 systemd-resolved[263]: Defaulting to hostname 'linux'. Jun 21 05:04:12.126467 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:04:12.133348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 05:04:12.245832 kernel: SCSI subsystem initialized Jun 21 05:04:12.255820 kernel: Loading iSCSI transport class v2.0-870. Jun 21 05:04:12.266817 kernel: iscsi: registered transport (tcp) Jun 21 05:04:12.290958 kernel: iscsi: registered transport (qla4xxx) Jun 21 05:04:12.291044 kernel: QLogic iSCSI HBA Driver Jun 21 05:04:12.311675 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jun 21 05:04:12.330442 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 05:04:12.331149 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 05:04:12.403435 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 05:04:12.407395 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 05:04:12.480819 kernel: raid6: avx2x4 gen() 28785 MB/s Jun 21 05:04:12.497810 kernel: raid6: avx2x2 gen() 30656 MB/s Jun 21 05:04:12.514918 kernel: raid6: avx2x1 gen() 25850 MB/s Jun 21 05:04:12.514962 kernel: raid6: using algorithm avx2x2 gen() 30656 MB/s Jun 21 05:04:12.533029 kernel: raid6: .... xor() 18873 MB/s, rmw enabled Jun 21 05:04:12.533125 kernel: raid6: using avx2x2 recovery algorithm Jun 21 05:04:12.558825 kernel: xor: automatically using best checksumming function avx Jun 21 05:04:12.745817 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 05:04:12.756409 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 05:04:12.758522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:04:12.799921 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jun 21 05:04:12.805535 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:04:12.807353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 05:04:12.844013 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Jun 21 05:04:12.880001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 05:04:12.882241 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 05:04:12.974333 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:04:12.977879 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jun 21 05:04:13.021338 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 21 05:04:13.024879 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 21 05:04:13.030583 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 05:04:13.030631 kernel: GPT:9289727 != 19775487 Jun 21 05:04:13.030644 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 05:04:13.030656 kernel: GPT:9289727 != 19775487 Jun 21 05:04:13.030667 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 05:04:13.030679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:04:13.035853 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 21 05:04:13.046814 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 05:04:13.048087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:04:13.048243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:04:13.051736 kernel: libata version 3.00 loaded. Jun 21 05:04:13.051291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:04:13.056209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:04:13.060254 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:04:13.092813 kernel: AES CTR mode by8 optimization enabled Jun 21 05:04:13.106823 kernel: ahci 0000:00:1f.2: version 3.0 Jun 21 05:04:13.107101 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 21 05:04:13.109862 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jun 21 05:04:13.110096 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jun 21 05:04:13.110273 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 21 05:04:13.112581 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jun 21 05:04:13.114804 kernel: scsi host0: ahci Jun 21 05:04:13.115030 kernel: scsi host1: ahci Jun 21 05:04:13.115213 kernel: scsi host2: ahci Jun 21 05:04:13.116584 kernel: scsi host3: ahci Jun 21 05:04:13.117831 kernel: scsi host4: ahci Jun 21 05:04:13.118543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:04:13.125760 kernel: scsi host5: ahci Jun 21 05:04:13.125980 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jun 21 05:04:13.125997 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jun 21 05:04:13.126011 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jun 21 05:04:13.126024 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jun 21 05:04:13.126037 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jun 21 05:04:13.127683 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jun 21 05:04:13.154053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:04:13.166116 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 05:04:13.176277 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 05:04:13.176799 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 05:04:13.182327 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 05:04:13.182802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 05:04:13.182855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:04:13.186850 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 21 05:04:13.200489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:04:13.202691 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 05:04:13.212535 disk-uuid[634]: Primary Header is updated. Jun 21 05:04:13.212535 disk-uuid[634]: Secondary Entries is updated. Jun 21 05:04:13.212535 disk-uuid[634]: Secondary Header is updated. Jun 21 05:04:13.216334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:04:13.255448 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:04:13.433736 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 21 05:04:13.433808 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 21 05:04:13.433829 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 21 05:04:13.433841 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jun 21 05:04:13.434803 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 21 05:04:13.435812 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 21 05:04:13.436966 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 21 05:04:13.436987 kernel: ata3.00: applying bridge limits Jun 21 05:04:13.438060 kernel: ata3.00: configured for UDMA/100 Jun 21 05:04:13.438813 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 21 05:04:13.518089 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 21 05:04:13.518339 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 21 05:04:13.538859 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 21 05:04:13.925614 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 05:04:13.927580 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 05:04:13.929311 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 05:04:13.929769 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jun 21 05:04:13.931153 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 05:04:13.961183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 05:04:14.253817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 05:04:14.254527 disk-uuid[636]: The operation has completed successfully. Jun 21 05:04:14.282729 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 05:04:14.282865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 05:04:14.339629 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 05:04:14.370342 sh[669]: Success Jun 21 05:04:14.393605 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 05:04:14.393649 kernel: device-mapper: uevent: version 1.0.3 Jun 21 05:04:14.393663 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 05:04:14.402798 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 21 05:04:14.432077 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 05:04:14.440201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 05:04:14.456985 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 21 05:04:14.466716 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 05:04:14.466753 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (681) Jun 21 05:04:14.469210 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 05:04:14.469233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:04:14.469245 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 05:04:14.474664 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 05:04:14.475695 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 05:04:14.476871 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 05:04:14.478641 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 05:04:14.480019 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 05:04:14.512809 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (716) Jun 21 05:04:14.515974 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:04:14.515999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:04:14.516010 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:04:14.523812 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:04:14.524994 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 05:04:14.527246 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 05:04:14.659212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jun 21 05:04:14.669653 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:04:14.712953 ignition[763]: Ignition 2.21.0 Jun 21 05:04:14.712968 ignition[763]: Stage: fetch-offline Jun 21 05:04:14.713024 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:04:14.713036 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 05:04:14.713173 ignition[763]: parsed url from cmdline: "" Jun 21 05:04:14.713178 ignition[763]: no config URL provided Jun 21 05:04:14.713185 ignition[763]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 05:04:14.713197 ignition[763]: no config at "/usr/lib/ignition/user.ign" Jun 21 05:04:14.713223 ignition[763]: op(1): [started] loading QEMU firmware config module Jun 21 05:04:14.713230 ignition[763]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 21 05:04:14.732858 ignition[763]: op(1): [finished] loading QEMU firmware config module Jun 21 05:04:14.739629 systemd-networkd[855]: lo: Link UP Jun 21 05:04:14.739638 systemd-networkd[855]: lo: Gained carrier Jun 21 05:04:14.743014 systemd-networkd[855]: Enumeration completed Jun 21 05:04:14.743950 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:04:14.743956 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 05:04:14.743998 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:04:14.745535 systemd-networkd[855]: eth0: Link UP Jun 21 05:04:14.745539 systemd-networkd[855]: eth0: Gained carrier Jun 21 05:04:14.745549 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:04:14.747663 systemd[1]: Reached target network.target - Network. 
Jun 21 05:04:14.773826 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 05:04:14.799357 ignition[763]: parsing config with SHA512: 6986c018798698fe9b9f0c44b6279140a4525ef803bed3db5fb3dee1e176e302004f413b2606d19ae673c07049dbcd2be8265993a6a84ba504bc89bb8ba2690e Jun 21 05:04:14.805481 unknown[763]: fetched base config from "system" Jun 21 05:04:14.805497 unknown[763]: fetched user config from "qemu" Jun 21 05:04:14.807161 ignition[763]: fetch-offline: fetch-offline passed Jun 21 05:04:14.807262 ignition[763]: Ignition finished successfully Jun 21 05:04:14.810343 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 05:04:14.811190 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 21 05:04:14.812121 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 05:04:14.852866 ignition[863]: Ignition 2.21.0 Jun 21 05:04:14.852877 ignition[863]: Stage: kargs Jun 21 05:04:14.852994 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:04:14.853004 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 05:04:14.855109 ignition[863]: kargs: kargs passed Jun 21 05:04:14.855161 ignition[863]: Ignition finished successfully Jun 21 05:04:14.859292 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 05:04:14.861488 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 21 05:04:14.906813 ignition[872]: Ignition 2.21.0 Jun 21 05:04:14.906821 ignition[872]: Stage: disks Jun 21 05:04:14.907047 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jun 21 05:04:14.907060 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 05:04:14.908232 ignition[872]: disks: disks passed Jun 21 05:04:14.908286 ignition[872]: Ignition finished successfully Jun 21 05:04:14.914426 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 05:04:14.915256 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 05:04:14.917002 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 05:04:14.917319 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 05:04:14.917668 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:04:14.918170 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:04:14.926214 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 05:04:14.971567 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 21 05:04:15.070582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 05:04:15.073344 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 05:04:15.547822 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 05:04:15.549016 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 05:04:15.550699 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 05:04:15.553420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:04:15.555097 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 05:04:15.556484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jun 21 05:04:15.556528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 05:04:15.556552 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 05:04:15.574840 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 05:04:15.577099 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 05:04:15.583835 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (890) Jun 21 05:04:15.586229 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:04:15.586270 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:04:15.586282 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:04:15.590661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 05:04:15.617471 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 05:04:15.623831 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Jun 21 05:04:15.627903 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 05:04:15.632971 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 05:04:15.723478 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 05:04:15.725550 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 05:04:15.727979 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 05:04:15.750049 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 05:04:15.751394 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:04:15.762925 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 21 05:04:15.788089 ignition[1004]: INFO : Ignition 2.21.0 Jun 21 05:04:15.788089 ignition[1004]: INFO : Stage: mount Jun 21 05:04:15.790081 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:04:15.790081 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 05:04:15.793964 systemd-networkd[855]: eth0: Gained IPv6LL Jun 21 05:04:15.795535 ignition[1004]: INFO : mount: mount passed Jun 21 05:04:15.795535 ignition[1004]: INFO : Ignition finished successfully Jun 21 05:04:15.797599 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 05:04:15.800729 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 05:04:16.550686 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 05:04:16.571595 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1016) Jun 21 05:04:16.571642 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 05:04:16.571655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 05:04:16.573307 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 05:04:16.577067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 05:04:16.635986 ignition[1033]: INFO : Ignition 2.21.0 Jun 21 05:04:16.635986 ignition[1033]: INFO : Stage: files Jun 21 05:04:16.638077 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 05:04:16.638077 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 21 05:04:16.640802 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Jun 21 05:04:16.642184 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 05:04:16.642184 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 05:04:16.646451 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 05:04:16.648043 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 05:04:16.650081 unknown[1033]: wrote ssh authorized keys file for user: core Jun 21 05:04:16.651252 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 05:04:16.652733 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 05:04:16.652733 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 21 05:04:16.692942 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 05:04:16.829237 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 05:04:16.829237 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 21 05:04:16.833583 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 21 05:04:16.848057 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 21 05:04:16.848057 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 05:04:16.848057 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 05:04:16.848057 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 05:04:16.848057 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 21 05:04:17.523654 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 21 05:04:18.375738 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 21 05:04:18.375738 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 21 05:04:18.380055 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 05:04:18.387176 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 05:04:18.387176 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 21 05:04:18.387176 ignition[1033]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jun 21 05:04:18.391982 ignition[1033]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 21 05:04:18.394181 ignition[1033]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 21 05:04:18.394181 ignition[1033]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jun 21 05:04:18.394181 ignition[1033]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jun 21 05:04:18.426354 ignition[1033]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 21 05:04:18.431998 ignition[1033]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 21 05:04:18.434072 ignition[1033]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 21 05:04:18.434072 ignition[1033]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jun 21 05:04:18.434072 ignition[1033]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jun 21 05:04:18.434072 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 05:04:18.434072 ignition[1033]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 05:04:18.434072 ignition[1033]: INFO : files: files passed
Jun 21 05:04:18.434072 ignition[1033]: INFO : Ignition finished successfully
Jun 21 05:04:18.435831 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 21 05:04:18.439987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 21 05:04:18.442087 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 21 05:04:18.463933 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 21 05:04:18.464081 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 21 05:04:18.468039 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory
Jun 21 05:04:18.475075 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 05:04:18.475075 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 05:04:18.478418 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 05:04:18.481308 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 05:04:18.484141 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 21 05:04:18.486574 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 21 05:04:18.543157 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 21 05:04:18.544275 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 21 05:04:18.547359 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 21 05:04:18.548043 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 21 05:04:18.548621 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 21 05:04:18.549537 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 21 05:04:18.583822 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 05:04:18.588240 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 21 05:04:18.617196 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 21 05:04:18.617932 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 05:04:18.618471 systemd[1]: Stopped target timers.target - Timer Units.
Jun 21 05:04:18.618860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 21 05:04:18.619035 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 05:04:18.628483 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 21 05:04:18.628854 systemd[1]: Stopped target basic.target - Basic System.
Jun 21 05:04:18.631215 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 21 05:04:18.631600 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 21 05:04:18.632145 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 21 05:04:18.632540 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 21 05:04:18.633089 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 21 05:04:18.633449 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 21 05:04:18.633833 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 21 05:04:18.634261 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 21 05:04:18.634613 systemd[1]: Stopped target swap.target - Swaps.
Jun 21 05:04:18.635086 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 21 05:04:18.635215 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 21 05:04:18.673987 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 21 05:04:18.674626 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 05:04:18.675134 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 21 05:04:18.675330 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 05:04:18.680361 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 21 05:04:18.680539 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 21 05:04:18.686075 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 21 05:04:18.686189 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 21 05:04:18.688197 systemd[1]: Stopped target paths.target - Path Units.
Jun 21 05:04:18.690470 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 21 05:04:18.695914 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 05:04:18.699171 systemd[1]: Stopped target slices.target - Slice Units.
Jun 21 05:04:18.699820 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 21 05:04:18.700379 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 21 05:04:18.700540 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 05:04:18.703446 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 21 05:04:18.703550 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 05:04:18.705597 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 21 05:04:18.705719 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 05:04:18.709351 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 21 05:04:18.709487 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 21 05:04:18.712452 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 21 05:04:18.713138 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 21 05:04:18.713301 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 05:04:18.716639 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 21 05:04:18.721876 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 21 05:04:18.722942 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 05:04:18.725234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 21 05:04:18.726271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 21 05:04:18.734167 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 21 05:04:18.734333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 21 05:04:18.749735 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 21 05:04:18.757193 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 21 05:04:18.770745 ignition[1088]: INFO : Ignition 2.21.0
Jun 21 05:04:18.770745 ignition[1088]: INFO : Stage: umount
Jun 21 05:04:18.770745 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 21 05:04:18.770745 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jun 21 05:04:18.770745 ignition[1088]: INFO : umount: umount passed
Jun 21 05:04:18.770745 ignition[1088]: INFO : Ignition finished successfully
Jun 21 05:04:18.757324 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 21 05:04:18.768959 systemd[1]: Stopped target network.target - Network.
Jun 21 05:04:18.771881 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 21 05:04:18.771991 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 21 05:04:18.772641 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 21 05:04:18.772689 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 21 05:04:18.773213 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 21 05:04:18.773278 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 21 05:04:18.773630 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 21 05:04:18.773674 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 21 05:04:18.774161 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 21 05:04:18.793511 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 21 05:04:18.796673 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 21 05:04:18.796879 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 21 05:04:18.801414 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 21 05:04:18.801763 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 21 05:04:18.801878 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 05:04:18.805840 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 21 05:04:18.806134 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 21 05:04:18.806282 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 21 05:04:18.815135 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 21 05:04:18.815809 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 21 05:04:18.816709 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 21 05:04:18.816753 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 05:04:18.818034 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 21 05:04:18.822194 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 21 05:04:18.822267 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 21 05:04:18.822655 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 21 05:04:18.822705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 21 05:04:18.828428 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 21 05:04:18.828489 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 21 05:04:18.829187 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 21 05:04:18.830747 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 21 05:04:18.864356 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 21 05:04:18.864632 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 21 05:04:18.867134 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 21 05:04:18.867184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 21 05:04:18.868228 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 21 05:04:18.868304 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 05:04:18.868597 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 21 05:04:18.868645 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 21 05:04:18.869539 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 21 05:04:18.869586 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 21 05:04:18.918171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 21 05:04:18.918243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 05:04:18.933657 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 21 05:04:18.934186 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 21 05:04:18.934247 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 05:04:18.966002 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 21 05:04:18.966058 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 05:04:18.971350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 21 05:04:18.971428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 05:04:18.975693 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 21 05:04:18.984957 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 21 05:04:19.007507 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 21 05:04:19.007634 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 21 05:04:19.064935 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 21 05:04:19.065087 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 21 05:04:19.067391 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 21 05:04:19.067816 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 21 05:04:19.067879 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 21 05:04:19.069374 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 21 05:04:19.095151 systemd[1]: Switching root.
Jun 21 05:04:19.134640 systemd-journald[219]: Journal stopped
Jun 21 05:04:20.726286 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jun 21 05:04:20.726360 kernel: SELinux: policy capability network_peer_controls=1
Jun 21 05:04:20.726376 kernel: SELinux: policy capability open_perms=1
Jun 21 05:04:20.726387 kernel: SELinux: policy capability extended_socket_class=1
Jun 21 05:04:20.726404 kernel: SELinux: policy capability always_check_network=0
Jun 21 05:04:20.726414 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 21 05:04:20.726425 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 21 05:04:20.726442 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 21 05:04:20.726453 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 21 05:04:20.726464 kernel: SELinux: policy capability userspace_initial_context=0
Jun 21 05:04:20.726481 kernel: audit: type=1403 audit(1750482259.649:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 21 05:04:20.726500 systemd[1]: Successfully loaded SELinux policy in 52.415ms.
Jun 21 05:04:20.726514 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.140ms.
Jun 21 05:04:20.726528 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 05:04:20.726540 systemd[1]: Detected virtualization kvm.
Jun 21 05:04:20.726552 systemd[1]: Detected architecture x86-64.
Jun 21 05:04:20.726587 systemd[1]: Detected first boot.
Jun 21 05:04:20.726605 systemd[1]: Initializing machine ID from VM UUID.
Jun 21 05:04:20.726617 zram_generator::config[1134]: No configuration found.
Jun 21 05:04:20.726629 kernel: Guest personality initialized and is inactive
Jun 21 05:04:20.726643 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 21 05:04:20.726683 kernel: Initialized host personality
Jun 21 05:04:20.726694 kernel: NET: Registered PF_VSOCK protocol family
Jun 21 05:04:20.726706 systemd[1]: Populated /etc with preset unit settings.
Jun 21 05:04:20.726718 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 21 05:04:20.726730 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 21 05:04:20.726742 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 21 05:04:20.726753 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 21 05:04:20.726767 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 21 05:04:20.726806 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 21 05:04:20.726819 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 21 05:04:20.726831 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 21 05:04:20.726860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 21 05:04:20.726879 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 21 05:04:20.726891 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 21 05:04:20.726902 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 21 05:04:20.726915 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 05:04:20.726930 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 05:04:20.726941 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 21 05:04:20.726953 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 21 05:04:20.726965 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 21 05:04:20.726978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 05:04:20.726990 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 21 05:04:20.727001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 05:04:20.727015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 05:04:20.727028 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 21 05:04:20.727040 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 21 05:04:20.727052 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 21 05:04:20.727064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 21 05:04:20.727076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 05:04:20.727088 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 21 05:04:20.727100 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 05:04:20.727112 systemd[1]: Reached target swap.target - Swaps.
Jun 21 05:04:20.727124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 21 05:04:20.727138 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 21 05:04:20.727150 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 21 05:04:20.727161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 05:04:20.727174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 05:04:20.727185 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 05:04:20.727197 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 21 05:04:20.727218 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 21 05:04:20.727230 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 21 05:04:20.727242 systemd[1]: Mounting media.mount - External Media Directory...
Jun 21 05:04:20.727256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 05:04:20.727268 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 21 05:04:20.727281 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 21 05:04:20.727295 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 21 05:04:20.727310 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 21 05:04:20.727326 systemd[1]: Reached target machines.target - Containers.
Jun 21 05:04:20.727341 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 21 05:04:20.727356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 21 05:04:20.727371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 05:04:20.727383 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 21 05:04:20.727395 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 21 05:04:20.727406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 21 05:04:20.727418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 21 05:04:20.727430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 21 05:04:20.727442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 21 05:04:20.727454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 21 05:04:20.727468 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 21 05:04:20.727480 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 21 05:04:20.727492 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 21 05:04:20.727504 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 21 05:04:20.727516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 21 05:04:20.727528 kernel: fuse: init (API version 7.41)
Jun 21 05:04:20.727539 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 05:04:20.727561 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 05:04:20.727572 kernel: loop: module loaded
Jun 21 05:04:20.727587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 05:04:20.727599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 21 05:04:20.727611 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 21 05:04:20.727623 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 21 05:04:20.727635 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 21 05:04:20.727649 systemd[1]: Stopped verity-setup.service.
Jun 21 05:04:20.727661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 05:04:20.727675 kernel: ACPI: bus type drm_connector registered
Jun 21 05:04:20.727687 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 21 05:04:20.727698 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 21 05:04:20.727712 systemd[1]: Mounted media.mount - External Media Directory.
Jun 21 05:04:20.727725 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 21 05:04:20.727736 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 21 05:04:20.727748 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 21 05:04:20.727796 systemd-journald[1205]: Collecting audit messages is disabled.
Jun 21 05:04:20.727821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 05:04:20.727833 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 21 05:04:20.727847 systemd-journald[1205]: Journal started
Jun 21 05:04:20.727879 systemd-journald[1205]: Runtime Journal (/run/log/journal/9232ff01ae38467abfb3b63e211435c5) is 6M, max 48.2M, 42.2M free.
Jun 21 05:04:20.455942 systemd[1]: Queued start job for default target multi-user.target.
Jun 21 05:04:20.477423 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 21 05:04:20.477930 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 21 05:04:20.731838 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 21 05:04:20.734440 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 05:04:20.735744 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 21 05:04:20.737617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 21 05:04:20.737974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 21 05:04:20.739583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 21 05:04:20.739904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 21 05:04:20.741416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 21 05:04:20.741737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 21 05:04:20.743509 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 21 05:04:20.743806 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 21 05:04:20.745356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 21 05:04:20.745636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 21 05:04:20.747211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 05:04:20.749006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 05:04:20.750740 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 21 05:04:20.752508 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 21 05:04:20.773131 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 05:04:20.777591 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 21 05:04:20.781941 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 21 05:04:20.783383 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 21 05:04:20.783438 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 21 05:04:20.786821 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 21 05:04:20.803938 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 21 05:04:20.805714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 21 05:04:20.808143 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 21 05:04:20.811697 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 21 05:04:20.813032 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 21 05:04:20.816231 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 21 05:04:20.817735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 21 05:04:20.820912 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 05:04:20.828834 systemd-journald[1205]: Time spent on flushing to /var/log/journal/9232ff01ae38467abfb3b63e211435c5 is 14.383ms for 1034 entries.
Jun 21 05:04:20.828834 systemd-journald[1205]: System Journal (/var/log/journal/9232ff01ae38467abfb3b63e211435c5) is 8M, max 195.6M, 187.6M free.
Jun 21 05:04:20.851134 systemd-journald[1205]: Received client request to flush runtime journal. Jun 21 05:04:20.851175 kernel: loop0: detected capacity change from 0 to 113872 Jun 21 05:04:20.825295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 05:04:20.835468 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 05:04:20.839830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 05:04:20.845143 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 05:04:20.848546 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 05:04:20.850538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 05:04:20.857083 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 05:04:20.862479 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 05:04:20.866467 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 05:04:20.881810 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 05:04:20.891372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 05:04:20.907360 kernel: loop1: detected capacity change from 0 to 146240 Jun 21 05:04:20.904722 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 05:04:20.910170 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 05:04:20.914845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 05:04:20.986856 kernel: loop2: detected capacity change from 0 to 224512 Jun 21 05:04:21.002710 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jun 21 05:04:21.002729 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. 
Jun 21 05:04:21.010525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 05:04:21.014888 kernel: loop3: detected capacity change from 0 to 113872 Jun 21 05:04:21.029822 kernel: loop4: detected capacity change from 0 to 146240 Jun 21 05:04:21.045868 kernel: loop5: detected capacity change from 0 to 224512 Jun 21 05:04:21.052640 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 21 05:04:21.054224 (sd-merge)[1280]: Merged extensions into '/usr'. Jun 21 05:04:21.144460 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 05:04:21.144480 systemd[1]: Reloading... Jun 21 05:04:21.256816 zram_generator::config[1302]: No configuration found. Jun 21 05:04:21.396153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:04:21.440702 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 05:04:21.480432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 05:04:21.480909 systemd[1]: Reloading finished in 335 ms. Jun 21 05:04:21.509030 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 05:04:21.510617 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 05:04:21.529244 systemd[1]: Starting ensure-sysext.service... Jun 21 05:04:21.531273 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 05:04:21.541914 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... Jun 21 05:04:21.541933 systemd[1]: Reloading... Jun 21 05:04:21.650805 zram_generator::config[1369]: No configuration found. 
Jun 21 05:04:21.659161 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 05:04:21.659560 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 05:04:21.660016 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 05:04:21.660344 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 05:04:21.661316 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 05:04:21.661639 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Jun 21 05:04:21.661786 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Jun 21 05:04:21.665975 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:04:21.666049 systemd-tmpfiles[1344]: Skipping /boot Jun 21 05:04:21.678846 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 05:04:21.678925 systemd-tmpfiles[1344]: Skipping /boot Jun 21 05:04:21.780017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:04:21.875511 systemd[1]: Reloading finished in 333 ms. Jun 21 05:04:21.897509 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 05:04:21.913877 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 05:04:21.923819 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:04:21.926412 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jun 21 05:04:21.951328 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 05:04:21.955548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 05:04:21.959995 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 05:04:21.962405 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 05:04:21.966091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:04:21.966270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:04:21.968025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 05:04:21.970539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 05:04:21.975855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 05:04:21.977192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:04:21.977311 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:04:21.977404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:04:21.980067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:04:21.980252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 21 05:04:21.980400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:04:21.980498 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:04:21.980585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:04:21.983741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:04:21.983977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 05:04:21.986008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 05:04:21.987596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 05:04:21.987871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 05:04:21.988042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 05:04:21.992537 systemd[1]: Finished ensure-sysext.service. Jun 21 05:04:21.994060 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 05:04:22.003051 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 05:04:22.008886 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jun 21 05:04:22.024003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 05:04:22.025009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 05:04:22.029127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 05:04:22.029421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 05:04:22.031621 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 05:04:22.031917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 05:04:22.033709 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 05:04:22.034055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 05:04:22.037184 augenrules[1443]: No rules Jun 21 05:04:22.038931 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:04:22.039271 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:04:22.045326 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 05:04:22.049750 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 05:04:22.053456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 05:04:22.053536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 05:04:22.056009 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 05:04:22.057520 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 05:04:22.059810 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jun 21 05:04:22.059996 systemd-udevd[1416]: Using default interface naming scheme 'v255'. Jun 21 05:04:22.082069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 05:04:22.085897 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 05:04:22.158533 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 05:04:22.248068 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 05:04:22.281479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 05:04:22.285916 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 05:04:22.297798 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 05:04:22.314706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 05:04:22.332817 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jun 21 05:04:22.333137 systemd-networkd[1461]: lo: Link UP Jun 21 05:04:22.333149 systemd-networkd[1461]: lo: Gained carrier Jun 21 05:04:22.334711 systemd-networkd[1461]: Enumeration completed Jun 21 05:04:22.334844 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 05:04:22.335438 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 05:04:22.335453 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 05:04:22.336311 systemd-networkd[1461]: eth0: Link UP Jun 21 05:04:22.336467 systemd-networkd[1461]: eth0: Gained carrier Jun 21 05:04:22.336488 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 21 05:04:22.340255 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 05:04:22.342804 kernel: ACPI: button: Power Button [PWRF] Jun 21 05:04:22.344096 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 05:04:22.349844 systemd-networkd[1461]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 21 05:04:22.355935 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jun 21 05:04:22.356185 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 21 05:04:22.356353 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 05:04:22.388842 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 05:04:22.408421 systemd-resolved[1412]: Positive Trust Anchors: Jun 21 05:04:22.408440 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 05:04:22.408474 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 05:04:22.420000 systemd-resolved[1412]: Defaulting to hostname 'linux'. Jun 21 05:04:22.423998 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 05:04:22.425331 systemd[1]: Reached target network.target - Network. Jun 21 05:04:22.426281 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jun 21 05:04:22.442053 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 05:04:22.443396 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 05:04:22.444589 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 05:04:22.447874 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 05:04:22.449142 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 05:04:22.450290 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 05:04:22.451549 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 05:04:22.451579 systemd[1]: Reached target paths.target - Path Units. Jun 21 05:04:22.452510 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 05:04:22.453677 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 05:04:22.454857 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 05:04:22.456091 systemd[1]: Reached target timers.target - Timer Units. Jun 21 05:04:22.458323 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 05:04:22.461879 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 05:04:22.466422 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 05:04:22.471070 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 05:04:22.472438 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 05:04:23.690163 systemd-resolved[1412]: Clock change detected. Flushing caches. 
Jun 21 05:04:23.691599 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 21 05:04:23.691900 systemd-timesyncd[1431]: Initial clock synchronization to Sat 2025-06-21 05:04:23.690112 UTC. Jun 21 05:04:23.703085 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 05:04:23.706035 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 05:04:23.708064 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 05:04:23.728989 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 05:04:23.731738 systemd[1]: Reached target basic.target - Basic System. Jun 21 05:04:23.733042 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:04:23.733089 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 05:04:23.737562 kernel: kvm_amd: TSC scaling supported Jun 21 05:04:23.737597 kernel: kvm_amd: Nested Virtualization enabled Jun 21 05:04:23.737622 kernel: kvm_amd: Nested Paging enabled Jun 21 05:04:23.737634 kernel: kvm_amd: LBR virtualization supported Jun 21 05:04:23.738659 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 21 05:04:23.738682 kernel: kvm_amd: Virtual GIF supported Jun 21 05:04:23.739681 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 05:04:23.744213 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 05:04:23.752880 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 05:04:23.755446 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 05:04:23.762186 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jun 21 05:04:23.763466 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 05:04:23.765725 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 05:04:23.768536 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 05:04:23.770334 jq[1531]: false Jun 21 05:04:23.773458 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 05:04:23.779276 kernel: EDAC MC: Ver: 3.0.0 Jun 21 05:04:23.775733 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 05:04:23.783510 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache Jun 21 05:04:23.782973 oslogin_cache_refresh[1533]: Refreshing passwd entry cache Jun 21 05:04:23.793299 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting Jun 21 05:04:23.793299 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:04:23.793299 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache Jun 21 05:04:23.792818 oslogin_cache_refresh[1533]: Failure getting users, quitting Jun 21 05:04:23.792831 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 05:04:23.792879 oslogin_cache_refresh[1533]: Refreshing group entry cache Jun 21 05:04:23.798809 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting Jun 21 05:04:23.798868 oslogin_cache_refresh[1533]: Failure getting groups, quitting Jun 21 05:04:23.798951 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jun 21 05:04:23.798993 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 05:04:23.808323 extend-filesystems[1532]: Found /dev/vda6 Jun 21 05:04:23.810812 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 05:04:23.815904 extend-filesystems[1532]: Found /dev/vda9 Jun 21 05:04:23.817513 extend-filesystems[1532]: Checking size of /dev/vda9 Jun 21 05:04:23.822694 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 05:04:23.826505 extend-filesystems[1532]: Resized partition /dev/vda9 Jun 21 05:04:23.827360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 05:04:23.829712 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 05:04:23.830248 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 05:04:23.831041 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 05:04:23.833302 extend-filesystems[1557]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 05:04:23.833816 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 05:04:23.837261 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 05:04:23.839823 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 21 05:04:23.841962 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 05:04:23.844936 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 05:04:23.845931 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 05:04:23.846195 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jun 21 05:04:23.848252 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 05:04:23.848541 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 05:04:23.851307 jq[1560]: true Jun 21 05:04:23.852289 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 05:04:23.852895 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 05:04:23.867873 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 05:04:23.928644 dbus-daemon[1529]: [system] SELinux support is enabled Jun 21 05:04:23.928816 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 05:04:23.943824 update_engine[1558]: I20250621 05:04:23.875519 1558 main.cc:92] Flatcar Update Engine starting Jun 21 05:04:23.943824 update_engine[1558]: I20250621 05:04:23.931797 1558 update_check_scheduler.cc:74] Next update check in 4m38s Jun 21 05:04:23.942730 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 05:04:23.942754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 05:04:23.944312 jq[1564]: true Jun 21 05:04:23.944388 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 05:04:23.944406 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 05:04:23.946665 systemd[1]: Started update-engine.service - Update Engine. Jun 21 05:04:23.955570 tar[1563]: linux-amd64/LICENSE Jun 21 05:04:23.955570 tar[1563]: linux-amd64/helm Jun 21 05:04:23.951311 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jun 21 05:04:23.961530 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 21 05:04:23.981360 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 05:04:23.981360 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 21 05:04:23.981360 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 21 05:04:23.987510 extend-filesystems[1532]: Resized filesystem in /dev/vda9 Jun 21 05:04:23.989115 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 05:04:23.989420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 05:04:24.062903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 05:04:24.119262 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Jun 21 05:04:24.121837 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 05:04:24.085998 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 05:04:24.089129 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 21 05:04:24.101213 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 05:04:24.106363 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 05:04:24.121681 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 05:04:24.130751 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 05:04:24.131014 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 05:04:24.136389 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jun 21 05:04:24.139325 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 05:04:24.139351 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 05:04:24.142592 systemd-logind[1550]: New seat seat0. Jun 21 05:04:24.149466 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 05:04:24.213037 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 05:04:24.218966 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 05:04:24.223040 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 05:04:24.231466 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 05:04:24.356913 containerd[1565]: time="2025-06-21T05:04:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 05:04:24.360593 containerd[1565]: time="2025-06-21T05:04:24.360551894Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 05:04:24.373664 containerd[1565]: time="2025-06-21T05:04:24.373609852Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.244µs" Jun 21 05:04:24.373664 containerd[1565]: time="2025-06-21T05:04:24.373651951Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 05:04:24.373772 containerd[1565]: time="2025-06-21T05:04:24.373678300Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 05:04:24.373970 containerd[1565]: time="2025-06-21T05:04:24.373941063Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 05:04:24.373970 containerd[1565]: 
time="2025-06-21T05:04:24.373964497Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 05:04:24.374015 containerd[1565]: time="2025-06-21T05:04:24.373995485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374101 containerd[1565]: time="2025-06-21T05:04:24.374076447Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374101 containerd[1565]: time="2025-06-21T05:04:24.374091375Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374474 containerd[1565]: time="2025-06-21T05:04:24.374444398Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374474 containerd[1565]: time="2025-06-21T05:04:24.374463534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374534 containerd[1565]: time="2025-06-21T05:04:24.374474995Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374534 containerd[1565]: time="2025-06-21T05:04:24.374499671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 05:04:24.374649 containerd[1565]: time="2025-06-21T05:04:24.374625007Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 05:04:24.375090 containerd[1565]: time="2025-06-21T05:04:24.375065964Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:04:24.375208 containerd[1565]: time="2025-06-21T05:04:24.375115767Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 05:04:24.375208 containerd[1565]: time="2025-06-21T05:04:24.375137909Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 05:04:24.376319 containerd[1565]: time="2025-06-21T05:04:24.376282947Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 05:04:24.377056 containerd[1565]: time="2025-06-21T05:04:24.376761956Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 05:04:24.377056 containerd[1565]: time="2025-06-21T05:04:24.376865089Z" level=info msg="metadata content store policy set" policy=shared Jun 21 05:04:24.383474 containerd[1565]: time="2025-06-21T05:04:24.383434694Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 05:04:24.383618 containerd[1565]: time="2025-06-21T05:04:24.383515666Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 05:04:24.383618 containerd[1565]: time="2025-06-21T05:04:24.383531806Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 05:04:24.383618 containerd[1565]: time="2025-06-21T05:04:24.383544670Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 05:04:24.383618 containerd[1565]: time="2025-06-21T05:04:24.383613409Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 05:04:24.383696 containerd[1565]: 
time="2025-06-21T05:04:24.383626835Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 05:04:24.383696 containerd[1565]: time="2025-06-21T05:04:24.383640230Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 05:04:24.383696 containerd[1565]: time="2025-06-21T05:04:24.383656190Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 05:04:24.383696 containerd[1565]: time="2025-06-21T05:04:24.383669174Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 05:04:24.383696 containerd[1565]: time="2025-06-21T05:04:24.383682158Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 05:04:24.383696 containerd[1565]: time="2025-06-21T05:04:24.383694732Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 05:04:24.383898 containerd[1565]: time="2025-06-21T05:04:24.383729637Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 05:04:24.383969 containerd[1565]: time="2025-06-21T05:04:24.383934682Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 05:04:24.383969 containerd[1565]: time="2025-06-21T05:04:24.383966321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 05:04:24.384041 containerd[1565]: time="2025-06-21T05:04:24.383982802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 05:04:24.384041 containerd[1565]: time="2025-06-21T05:04:24.384011797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 05:04:24.384041 containerd[1565]: 
time="2025-06-21T05:04:24.384025322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 05:04:24.384105 containerd[1565]: time="2025-06-21T05:04:24.384040130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 05:04:24.384105 containerd[1565]: time="2025-06-21T05:04:24.384052563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 05:04:24.384105 containerd[1565]: time="2025-06-21T05:04:24.384064355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 05:04:24.384105 containerd[1565]: time="2025-06-21T05:04:24.384090174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 05:04:24.384258 containerd[1565]: time="2025-06-21T05:04:24.384111694Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 05:04:24.384258 containerd[1565]: time="2025-06-21T05:04:24.384123757Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 05:04:24.384299 containerd[1565]: time="2025-06-21T05:04:24.384287394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 05:04:24.384326 containerd[1565]: time="2025-06-21T05:04:24.384312230Z" level=info msg="Start snapshots syncer" Jun 21 05:04:24.384386 containerd[1565]: time="2025-06-21T05:04:24.384366663Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 05:04:24.384838 containerd[1565]: time="2025-06-21T05:04:24.384777614Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 05:04:24.385007 containerd[1565]: time="2025-06-21T05:04:24.384856542Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 05:04:24.385007 containerd[1565]: time="2025-06-21T05:04:24.384996725Z" level=info 
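The CRI config dump in the entry above shows `SystemdCgroup: true` for the default `runc` runtime, meaning containerd delegates cgroup management to systemd. A `config.toml` fragment that would produce this setting is sketched below — this is an illustrative sketch using the containerd 2.x CRI plugin table layout, not a dump of this host's actual configuration file:

```toml
# /etc/containerd/config.toml (sketch, containerd 2.x schema)
version = 3

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  # Matches "SystemdCgroup":true in the logged CRI config above;
  # required when the kubelet uses the systemd cgroup driver.
  SystemdCgroup = true
```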
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 05:04:24.385178 containerd[1565]: time="2025-06-21T05:04:24.385146916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 05:04:24.385178 containerd[1565]: time="2025-06-21T05:04:24.385174047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 05:04:24.385221 containerd[1565]: time="2025-06-21T05:04:24.385185268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 05:04:24.385221 containerd[1565]: time="2025-06-21T05:04:24.385196039Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 05:04:24.385257 containerd[1565]: time="2025-06-21T05:04:24.385236384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 05:04:24.385257 containerd[1565]: time="2025-06-21T05:04:24.385248046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 05:04:24.385299 containerd[1565]: time="2025-06-21T05:04:24.385260249Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 05:04:24.385299 containerd[1565]: time="2025-06-21T05:04:24.385290856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 05:04:24.385338 containerd[1565]: time="2025-06-21T05:04:24.385303991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 05:04:24.385338 containerd[1565]: time="2025-06-21T05:04:24.385316094Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 05:04:24.407750 containerd[1565]: time="2025-06-21T05:04:24.407639153Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:04:24.407750 containerd[1565]: time="2025-06-21T05:04:24.407756954Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 05:04:24.407750 containerd[1565]: time="2025-06-21T05:04:24.407770390Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407783103Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407793773Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407805816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407823840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407876378Z" level=info msg="runtime interface created" Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407883091Z" level=info msg="created NRI interface" Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407892920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407932754Z" level=info msg="Connect containerd service" Jun 21 05:04:24.408004 containerd[1565]: time="2025-06-21T05:04:24.407993769Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 05:04:24.409185 
containerd[1565]: time="2025-06-21T05:04:24.409158664Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 05:04:24.650264 containerd[1565]: time="2025-06-21T05:04:24.649743869Z" level=info msg="Start subscribing containerd event" Jun 21 05:04:24.650264 containerd[1565]: time="2025-06-21T05:04:24.649904741Z" level=info msg="Start recovering state" Jun 21 05:04:24.650264 containerd[1565]: time="2025-06-21T05:04:24.650173255Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 05:04:24.650426 containerd[1565]: time="2025-06-21T05:04:24.650178114Z" level=info msg="Start event monitor" Jun 21 05:04:24.650426 containerd[1565]: time="2025-06-21T05:04:24.650266800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 05:04:24.655934 containerd[1565]: time="2025-06-21T05:04:24.655845496Z" level=info msg="Start cni network conf syncer for default" Jun 21 05:04:24.656023 containerd[1565]: time="2025-06-21T05:04:24.655948750Z" level=info msg="Start streaming server" Jun 21 05:04:24.656197 containerd[1565]: time="2025-06-21T05:04:24.656161549Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 05:04:24.656253 containerd[1565]: time="2025-06-21T05:04:24.656197988Z" level=info msg="runtime interface starting up..." Jun 21 05:04:24.656253 containerd[1565]: time="2025-06-21T05:04:24.656222163Z" level=info msg="starting plugins..." Jun 21 05:04:24.656253 containerd[1565]: time="2025-06-21T05:04:24.656251678Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 05:04:24.656580 systemd[1]: Started containerd.service - containerd container runtime. 
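The `failed to load cni during init` error at the top of this entry is expected on first boot: the CRI plugin found no network config in `/etc/cni/net.d`, and the `Start cni network conf syncer` entry that follows shows containerd will pick one up once it appears (typically installed later by a CNI add-on). A minimal bridge conflist of the kind that would satisfy the syncer is sketched below — the file name, network name, and subnet are illustrative assumptions, not values taken from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into `/etc/cni/net.d/` (e.g. as `10-examplenet.conflist`), a file like this would clear the "cni plugin not initialized" condition on the next sync.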
Jun 21 05:04:24.657264 containerd[1565]: time="2025-06-21T05:04:24.657229693Z" level=info msg="containerd successfully booted in 0.300939s" Jun 21 05:04:24.729774 tar[1563]: linux-amd64/README.md Jun 21 05:04:24.753199 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 05:04:25.376706 systemd-networkd[1461]: eth0: Gained IPv6LL Jun 21 05:04:25.380432 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 05:04:25.382253 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 05:04:25.384883 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 21 05:04:25.387221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:04:25.402796 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 05:04:25.428423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 05:04:25.430078 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 21 05:04:25.430328 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 21 05:04:25.432798 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 05:04:26.070520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 05:04:26.073198 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:54704.service - OpenSSH per-connection server daemon (10.0.0.1:54704). Jun 21 05:04:26.201818 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 54704 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:26.205155 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:26.212324 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 05:04:26.214757 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jun 21 05:04:26.222565 systemd-logind[1550]: New session 1 of user core. Jun 21 05:04:26.253476 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 05:04:26.258256 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 05:04:26.278035 (systemd)[1669]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 05:04:26.280950 systemd-logind[1550]: New session c1 of user core. Jun 21 05:04:26.471556 systemd[1669]: Queued start job for default target default.target. Jun 21 05:04:26.488893 systemd[1669]: Created slice app.slice - User Application Slice. Jun 21 05:04:26.488921 systemd[1669]: Reached target paths.target - Paths. Jun 21 05:04:26.488966 systemd[1669]: Reached target timers.target - Timers. Jun 21 05:04:26.490667 systemd[1669]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 05:04:26.518726 systemd[1669]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 05:04:26.518870 systemd[1669]: Reached target sockets.target - Sockets. Jun 21 05:04:26.518911 systemd[1669]: Reached target basic.target - Basic System. Jun 21 05:04:26.518950 systemd[1669]: Reached target default.target - Main User Target. Jun 21 05:04:26.518982 systemd[1669]: Startup finished in 230ms. Jun 21 05:04:26.519866 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 05:04:26.534640 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 05:04:26.566788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:04:26.568584 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 05:04:26.570038 systemd[1]: Startup finished in 3.505s (kernel) + 7.977s (initrd) + 5.772s (userspace) = 17.255s. 
Jun 21 05:04:26.583946 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:04:26.596728 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:54708.service - OpenSSH per-connection server daemon (10.0.0.1:54708). Jun 21 05:04:26.653396 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 54708 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:26.655463 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:26.662410 systemd-logind[1550]: New session 2 of user core. Jun 21 05:04:26.676867 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 05:04:26.769222 sshd[1693]: Connection closed by 10.0.0.1 port 54708 Jun 21 05:04:26.769640 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jun 21 05:04:26.782090 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:54708.service: Deactivated successfully. Jun 21 05:04:26.784381 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 05:04:26.785188 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Jun 21 05:04:26.788808 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:54716.service - OpenSSH per-connection server daemon (10.0.0.1:54716). Jun 21 05:04:26.790051 systemd-logind[1550]: Removed session 2. Jun 21 05:04:26.845628 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 54716 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:26.847347 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:26.852239 systemd-logind[1550]: New session 3 of user core. Jun 21 05:04:26.858617 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jun 21 05:04:26.944399 sshd[1705]: Connection closed by 10.0.0.1 port 54716 Jun 21 05:04:26.945585 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jun 21 05:04:26.957827 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:54716.service: Deactivated successfully. Jun 21 05:04:26.959757 systemd[1]: session-3.scope: Deactivated successfully. Jun 21 05:04:26.960484 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Jun 21 05:04:26.963764 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:54732.service - OpenSSH per-connection server daemon (10.0.0.1:54732). Jun 21 05:04:26.964374 systemd-logind[1550]: Removed session 3. Jun 21 05:04:27.018922 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 54732 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:27.021023 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:27.027186 systemd-logind[1550]: New session 4 of user core. Jun 21 05:04:27.039684 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 05:04:27.094180 sshd[1714]: Connection closed by 10.0.0.1 port 54732 Jun 21 05:04:27.094545 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jun 21 05:04:27.106031 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:54732.service: Deactivated successfully. Jun 21 05:04:27.107952 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 05:04:27.108632 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Jun 21 05:04:27.111307 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:54738.service - OpenSSH per-connection server daemon (10.0.0.1:54738). Jun 21 05:04:27.111936 systemd-logind[1550]: Removed session 4. 
Jun 21 05:04:27.172119 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 54738 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:27.174054 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:27.178346 systemd-logind[1550]: New session 5 of user core. Jun 21 05:04:27.186697 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 05:04:27.202027 kubelet[1683]: E0621 05:04:27.201947 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:04:27.206053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:04:27.206291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:04:27.206728 systemd[1]: kubelet.service: Consumed 1.612s CPU time, 265M memory peak. Jun 21 05:04:27.246908 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 05:04:27.247228 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:04:27.267742 sudo[1724]: pam_unix(sudo:session): session closed for user root Jun 21 05:04:27.269251 sshd[1722]: Connection closed by 10.0.0.1 port 54738 Jun 21 05:04:27.269594 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jun 21 05:04:27.283185 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:54738.service: Deactivated successfully. Jun 21 05:04:27.285093 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 05:04:27.285945 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Jun 21 05:04:27.288893 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:54750.service - OpenSSH per-connection server daemon (10.0.0.1:54750). 
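The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal pre-bootstrap state: that file is written by `kubeadm init` or `kubeadm join`, and until then kubelet.service exits with status 1 and is restarted under its systemd restart policy (visible as the "Scheduled restart job" entry later in this log). A small shell sketch of the check an operator might run — the function name is illustrative, the path is the real one from the log:

```shell
# kubelet_ready PATH: report whether the kubelet config file exists yet.
# Before `kubeadm init`/`kubeadm join` runs, /var/lib/kubelet/config.yaml
# is absent and the kubelet is expected to crash-loop exactly as logged.
kubelet_ready() {
  if [ -f "$1" ]; then
    echo configured
  else
    echo missing
  fi
}

kubelet_ready /var/lib/kubelet/config.yaml
```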
Jun 21 05:04:27.289604 systemd-logind[1550]: Removed session 5. Jun 21 05:04:27.341012 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 54750 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:27.342560 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:27.347231 systemd-logind[1550]: New session 6 of user core. Jun 21 05:04:27.360646 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 05:04:27.505291 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 05:04:27.505632 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:04:27.513060 sudo[1734]: pam_unix(sudo:session): session closed for user root Jun 21 05:04:27.519983 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 05:04:27.520289 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:04:27.530541 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 05:04:27.580413 augenrules[1756]: No rules Jun 21 05:04:27.582296 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 05:04:27.582620 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 05:04:27.583925 sudo[1733]: pam_unix(sudo:session): session closed for user root Jun 21 05:04:27.585514 sshd[1732]: Connection closed by 10.0.0.1 port 54750 Jun 21 05:04:27.585890 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jun 21 05:04:27.602893 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:54750.service: Deactivated successfully. Jun 21 05:04:27.604695 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 05:04:27.605447 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. 
Jun 21 05:04:27.608251 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764). Jun 21 05:04:27.608993 systemd-logind[1550]: Removed session 6. Jun 21 05:04:27.661454 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:04:27.662911 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:04:27.667238 systemd-logind[1550]: New session 7 of user core. Jun 21 05:04:27.677612 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 05:04:27.730331 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 05:04:27.730680 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 05:04:28.304183 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 05:04:28.323816 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 05:04:28.759092 dockerd[1789]: time="2025-06-21T05:04:28.759029001Z" level=info msg="Starting up" Jun 21 05:04:28.760027 dockerd[1789]: time="2025-06-21T05:04:28.759983101Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 05:04:29.451028 dockerd[1789]: time="2025-06-21T05:04:29.450954794Z" level=info msg="Loading containers: start." Jun 21 05:04:29.463526 kernel: Initializing XFRM netlink socket Jun 21 05:04:29.736627 systemd-networkd[1461]: docker0: Link UP Jun 21 05:04:29.742176 dockerd[1789]: time="2025-06-21T05:04:29.742132474Z" level=info msg="Loading containers: done." Jun 21 05:04:29.757137 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3577693015-merged.mount: Deactivated successfully. 
Jun 21 05:04:29.758655 dockerd[1789]: time="2025-06-21T05:04:29.758600759Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 05:04:29.758792 dockerd[1789]: time="2025-06-21T05:04:29.758685829Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 05:04:29.758828 dockerd[1789]: time="2025-06-21T05:04:29.758811986Z" level=info msg="Initializing buildkit" Jun 21 05:04:29.788624 dockerd[1789]: time="2025-06-21T05:04:29.788575785Z" level=info msg="Completed buildkit initialization" Jun 21 05:04:29.794814 dockerd[1789]: time="2025-06-21T05:04:29.794771598Z" level=info msg="Daemon has completed initialization" Jun 21 05:04:29.794871 dockerd[1789]: time="2025-06-21T05:04:29.794829727Z" level=info msg="API listen on /run/docker.sock" Jun 21 05:04:29.795007 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 05:04:31.003550 containerd[1565]: time="2025-06-21T05:04:31.003467348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 21 05:04:31.580889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430458175.mount: Deactivated successfully. 
Jun 21 05:04:32.865082 containerd[1565]: time="2025-06-21T05:04:32.864904066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:32.865800 containerd[1565]: time="2025-06-21T05:04:32.865550589Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 21 05:04:32.866550 containerd[1565]: time="2025-06-21T05:04:32.866510169Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:32.869173 containerd[1565]: time="2025-06-21T05:04:32.869123692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:32.870137 containerd[1565]: time="2025-06-21T05:04:32.870105554Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.866568967s" Jun 21 05:04:32.870190 containerd[1565]: time="2025-06-21T05:04:32.870141231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 21 05:04:32.870873 containerd[1565]: time="2025-06-21T05:04:32.870833600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 21 05:04:34.018171 containerd[1565]: time="2025-06-21T05:04:34.018100715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:34.018830 containerd[1565]: time="2025-06-21T05:04:34.018762016Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 21 05:04:34.019940 containerd[1565]: time="2025-06-21T05:04:34.019898829Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:34.022502 containerd[1565]: time="2025-06-21T05:04:34.022452199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:34.023787 containerd[1565]: time="2025-06-21T05:04:34.023734244Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.152868775s" Jun 21 05:04:34.023787 containerd[1565]: time="2025-06-21T05:04:34.023784148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 21 05:04:34.024435 containerd[1565]: time="2025-06-21T05:04:34.024403770Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 21 05:04:35.350779 containerd[1565]: time="2025-06-21T05:04:35.350710156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:35.351752 containerd[1565]: time="2025-06-21T05:04:35.351396944Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 21 05:04:35.352648 containerd[1565]: time="2025-06-21T05:04:35.352619638Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:35.356690 containerd[1565]: time="2025-06-21T05:04:35.356641253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:35.357499 containerd[1565]: time="2025-06-21T05:04:35.357447355Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.333009982s" Jun 21 05:04:35.357499 containerd[1565]: time="2025-06-21T05:04:35.357478884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 21 05:04:35.357988 containerd[1565]: time="2025-06-21T05:04:35.357953314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 21 05:04:36.301735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045556295.mount: Deactivated successfully. 
Jun 21 05:04:36.948250 containerd[1565]: time="2025-06-21T05:04:36.948169774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:36.949278 containerd[1565]: time="2025-06-21T05:04:36.949210156Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 21 05:04:36.950701 containerd[1565]: time="2025-06-21T05:04:36.950659936Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:36.952993 containerd[1565]: time="2025-06-21T05:04:36.952947417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:36.953801 containerd[1565]: time="2025-06-21T05:04:36.953748761Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.595762946s" Jun 21 05:04:36.953847 containerd[1565]: time="2025-06-21T05:04:36.953801329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 21 05:04:36.954477 containerd[1565]: time="2025-06-21T05:04:36.954386928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 05:04:37.433651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 05:04:37.435284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 21 05:04:37.481519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107493196.mount: Deactivated successfully. Jun 21 05:04:37.680423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:04:37.694908 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 05:04:37.812915 kubelet[2085]: E0621 05:04:37.812839 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 05:04:37.819544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 05:04:37.819741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 05:04:37.820125 systemd[1]: kubelet.service: Consumed 290ms CPU time, 111.6M memory peak. 
Jun 21 05:04:38.463617 containerd[1565]: time="2025-06-21T05:04:38.463556748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:38.464264 containerd[1565]: time="2025-06-21T05:04:38.464239038Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 21 05:04:38.465500 containerd[1565]: time="2025-06-21T05:04:38.465448367Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:38.467979 containerd[1565]: time="2025-06-21T05:04:38.467942576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:38.468911 containerd[1565]: time="2025-06-21T05:04:38.468864746Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.514445137s" Jun 21 05:04:38.468911 containerd[1565]: time="2025-06-21T05:04:38.468911664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 05:04:38.469670 containerd[1565]: time="2025-06-21T05:04:38.469622658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 05:04:38.891535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364403990.mount: Deactivated successfully. 
Jun 21 05:04:38.898545 containerd[1565]: time="2025-06-21T05:04:38.898468203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:04:38.899278 containerd[1565]: time="2025-06-21T05:04:38.899241624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 21 05:04:38.900502 containerd[1565]: time="2025-06-21T05:04:38.900460040Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:04:38.902831 containerd[1565]: time="2025-06-21T05:04:38.902782958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 05:04:38.903548 containerd[1565]: time="2025-06-21T05:04:38.903505523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 433.822782ms" Jun 21 05:04:38.903548 containerd[1565]: time="2025-06-21T05:04:38.903541240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 05:04:38.904110 containerd[1565]: time="2025-06-21T05:04:38.904070383Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 21 05:04:39.408611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133747821.mount: 
Deactivated successfully. Jun 21 05:04:41.232011 containerd[1565]: time="2025-06-21T05:04:41.231930234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:41.232756 containerd[1565]: time="2025-06-21T05:04:41.232729885Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jun 21 05:04:41.234006 containerd[1565]: time="2025-06-21T05:04:41.233955644Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:41.237405 containerd[1565]: time="2025-06-21T05:04:41.237364279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:41.238372 containerd[1565]: time="2025-06-21T05:04:41.238328759Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.334223049s" Jun 21 05:04:41.238372 containerd[1565]: time="2025-06-21T05:04:41.238353816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 21 05:04:43.847866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:04:43.848048 systemd[1]: kubelet.service: Consumed 290ms CPU time, 111.6M memory peak. Jun 21 05:04:43.850410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 21 05:04:43.875947 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)... Jun 21 05:04:43.875967 systemd[1]: Reloading... Jun 21 05:04:43.961529 zram_generator::config[2277]: No configuration found. Jun 21 05:04:44.103479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:04:44.220160 systemd[1]: Reloading finished in 343 ms. Jun 21 05:04:44.286252 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 05:04:44.286365 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 05:04:44.286669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:04:44.286710 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.3M memory peak. Jun 21 05:04:44.288598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:04:44.479061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:04:44.482844 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:04:44.659865 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:04:44.659865 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 05:04:44.659865 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:04:44.660208 kubelet[2322]: I0621 05:04:44.659979 2322 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:04:45.147331 kubelet[2322]: I0621 05:04:45.147280 2322 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 05:04:45.147331 kubelet[2322]: I0621 05:04:45.147310 2322 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:04:45.147620 kubelet[2322]: I0621 05:04:45.147595 2322 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 05:04:45.171674 kubelet[2322]: E0621 05:04:45.171638 2322 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:45.172161 kubelet[2322]: I0621 05:04:45.172131 2322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:04:45.179856 kubelet[2322]: I0621 05:04:45.179833 2322 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:04:45.184838 kubelet[2322]: I0621 05:04:45.184796 2322 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 05:04:45.186572 kubelet[2322]: I0621 05:04:45.186527 2322 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:04:45.186760 kubelet[2322]: I0621 05:04:45.186563 2322 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:04:45.186884 kubelet[2322]: I0621 05:04:45.186766 2322 topology_manager.go:138] "Creating topology manager with none policy" 
Jun 21 05:04:45.186884 kubelet[2322]: I0621 05:04:45.186774 2322 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 05:04:45.186951 kubelet[2322]: I0621 05:04:45.186934 2322 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:04:45.189454 kubelet[2322]: I0621 05:04:45.189427 2322 kubelet.go:446] "Attempting to sync node with API server" Jun 21 05:04:45.189505 kubelet[2322]: I0621 05:04:45.189460 2322 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:04:45.189505 kubelet[2322]: I0621 05:04:45.189501 2322 kubelet.go:352] "Adding apiserver pod source" Jun 21 05:04:45.189559 kubelet[2322]: I0621 05:04:45.189517 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:04:45.190937 kubelet[2322]: W0621 05:04:45.190898 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:45.191000 kubelet[2322]: E0621 05:04:45.190949 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:45.191025 kubelet[2322]: W0621 05:04:45.190982 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:45.191047 kubelet[2322]: E0621 05:04:45.191024 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:45.192289 kubelet[2322]: I0621 05:04:45.192258 2322 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:04:45.192668 kubelet[2322]: I0621 05:04:45.192651 2322 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:04:45.193634 kubelet[2322]: W0621 05:04:45.193610 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 05:04:45.195846 kubelet[2322]: I0621 05:04:45.195817 2322 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 05:04:45.195894 kubelet[2322]: I0621 05:04:45.195856 2322 server.go:1287] "Started kubelet" Jun 21 05:04:45.197910 kubelet[2322]: I0621 05:04:45.197749 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:04:45.199682 kubelet[2322]: I0621 05:04:45.198465 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:04:45.199682 kubelet[2322]: I0621 05:04:45.198860 2322 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:04:45.199682 kubelet[2322]: I0621 05:04:45.198941 2322 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:04:45.199682 kubelet[2322]: I0621 05:04:45.199635 2322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:04:45.200440 kubelet[2322]: I0621 05:04:45.200412 2322 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 05:04:45.200805 kubelet[2322]: E0621 05:04:45.200785 2322 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:45.201430 kubelet[2322]: I0621 05:04:45.201411 2322 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 05:04:45.201941 kubelet[2322]: I0621 05:04:45.201557 2322 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:04:45.202349 kubelet[2322]: W0621 05:04:45.202304 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:45.202409 kubelet[2322]: E0621 05:04:45.202351 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:45.202441 kubelet[2322]: E0621 05:04:45.202413 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms" Jun 21 05:04:45.205548 kubelet[2322]: E0621 05:04:45.203331 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184af65d2ab63dc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-21 05:04:45.195836866 +0000 UTC 
m=+0.706114762,LastTimestamp:2025-06-21 05:04:45.195836866 +0000 UTC m=+0.706114762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 21 05:04:45.205663 kubelet[2322]: I0621 05:04:45.205618 2322 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:04:45.205663 kubelet[2322]: I0621 05:04:45.205639 2322 server.go:479] "Adding debug handlers to kubelet server" Jun 21 05:04:45.205890 kubelet[2322]: I0621 05:04:45.205849 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:04:45.206291 kubelet[2322]: E0621 05:04:45.206270 2322 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:04:45.207760 kubelet[2322]: I0621 05:04:45.207739 2322 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:04:45.221139 kubelet[2322]: I0621 05:04:45.221024 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 05:04:45.222350 kubelet[2322]: I0621 05:04:45.222336 2322 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 05:04:45.222350 kubelet[2322]: I0621 05:04:45.222347 2322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 05:04:45.222420 kubelet[2322]: I0621 05:04:45.222365 2322 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:04:45.222614 kubelet[2322]: I0621 05:04:45.222590 2322 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 21 05:04:45.222755 kubelet[2322]: I0621 05:04:45.222704 2322 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 05:04:45.223312 kubelet[2322]: I0621 05:04:45.222785 2322 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 05:04:45.223312 kubelet[2322]: I0621 05:04:45.222801 2322 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 05:04:45.223312 kubelet[2322]: E0621 05:04:45.222857 2322 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:04:45.223532 kubelet[2322]: W0621 05:04:45.223507 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:45.223643 kubelet[2322]: E0621 05:04:45.223614 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:45.301855 kubelet[2322]: E0621 05:04:45.301796 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:45.323087 kubelet[2322]: E0621 05:04:45.323049 2322 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 05:04:45.402472 kubelet[2322]: E0621 05:04:45.402345 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:45.403873 kubelet[2322]: E0621 05:04:45.403840 2322 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms" Jun 21 05:04:45.503020 kubelet[2322]: E0621 05:04:45.502979 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:45.523188 kubelet[2322]: E0621 05:04:45.523160 2322 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 05:04:45.603665 kubelet[2322]: E0621 05:04:45.603613 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:45.703779 kubelet[2322]: E0621 05:04:45.703739 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:45.704660 kubelet[2322]: I0621 05:04:45.704632 2322 policy_none.go:49] "None policy: Start" Jun 21 05:04:45.704660 kubelet[2322]: I0621 05:04:45.704659 2322 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 05:04:45.704731 kubelet[2322]: I0621 05:04:45.704672 2322 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:04:45.710523 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 05:04:45.721857 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 05:04:45.725059 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 21 05:04:45.735755 kubelet[2322]: I0621 05:04:45.735475 2322 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:04:45.735846 kubelet[2322]: I0621 05:04:45.735788 2322 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:04:45.735846 kubelet[2322]: I0621 05:04:45.735807 2322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:04:45.736141 kubelet[2322]: I0621 05:04:45.736071 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:04:45.737098 kubelet[2322]: E0621 05:04:45.737069 2322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 05:04:45.737239 kubelet[2322]: E0621 05:04:45.737111 2322 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 21 05:04:45.805128 kubelet[2322]: E0621 05:04:45.805069 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms" Jun 21 05:04:45.837421 kubelet[2322]: I0621 05:04:45.837384 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 05:04:45.837838 kubelet[2322]: E0621 05:04:45.837798 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jun 21 05:04:45.932318 systemd[1]: Created slice kubepods-burstable-podd44a6e607abd70bd6ee39de5ca4de8c8.slice - libcontainer container kubepods-burstable-podd44a6e607abd70bd6ee39de5ca4de8c8.slice. 
Jun 21 05:04:45.960360 kubelet[2322]: E0621 05:04:45.960258 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:45.963853 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jun 21 05:04:45.977698 kubelet[2322]: E0621 05:04:45.977659 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:45.980765 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jun 21 05:04:45.982498 kubelet[2322]: E0621 05:04:45.982460 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:46.005938 kubelet[2322]: I0621 05:04:46.005914 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d44a6e607abd70bd6ee39de5ca4de8c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44a6e607abd70bd6ee39de5ca4de8c8\") " pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:46.005992 kubelet[2322]: I0621 05:04:46.005944 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d44a6e607abd70bd6ee39de5ca4de8c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d44a6e607abd70bd6ee39de5ca4de8c8\") " pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:46.005992 kubelet[2322]: I0621 05:04:46.005967 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:46.005992 kubelet[2322]: I0621 05:04:46.005985 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:46.006066 kubelet[2322]: I0621 05:04:46.006003 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d44a6e607abd70bd6ee39de5ca4de8c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44a6e607abd70bd6ee39de5ca4de8c8\") " pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:46.006066 kubelet[2322]: I0621 05:04:46.006020 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:46.006066 kubelet[2322]: I0621 05:04:46.006037 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:46.006066 kubelet[2322]: I0621 05:04:46.006054 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:46.006144 kubelet[2322]: I0621 05:04:46.006116 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:46.039059 kubelet[2322]: I0621 05:04:46.039033 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 05:04:46.039420 kubelet[2322]: E0621 05:04:46.039380 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jun 21 05:04:46.100095 kubelet[2322]: W0621 05:04:46.100043 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:46.100150 kubelet[2322]: E0621 05:04:46.100101 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:46.247424 kubelet[2322]: W0621 05:04:46.247288 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: 
connect: connection refused Jun 21 05:04:46.247424 kubelet[2322]: E0621 05:04:46.247323 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:46.260706 kubelet[2322]: E0621 05:04:46.260680 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:46.261352 containerd[1565]: time="2025-06-21T05:04:46.261310631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d44a6e607abd70bd6ee39de5ca4de8c8,Namespace:kube-system,Attempt:0,}" Jun 21 05:04:46.279046 kubelet[2322]: E0621 05:04:46.278556 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:46.279325 containerd[1565]: time="2025-06-21T05:04:46.279211144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jun 21 05:04:46.285964 kubelet[2322]: E0621 05:04:46.285717 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:46.286317 containerd[1565]: time="2025-06-21T05:04:46.286290515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jun 21 05:04:46.289041 containerd[1565]: time="2025-06-21T05:04:46.289005779Z" level=info msg="connecting to shim 
fff91cff39f847bdd1a60f50f9bdd68a956ecae691705bc7895b74e9613fc7a2" address="unix:///run/containerd/s/c7bd2293ea38516f12bb23b99f7a92b7a44d732282aacd376b0f554c19f18cfc" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:04:46.307514 containerd[1565]: time="2025-06-21T05:04:46.307156732Z" level=info msg="connecting to shim a1066084aab0b80440ba9d635658e7feb5433677ee47fcffa5f882eaa84fcff8" address="unix:///run/containerd/s/12a4b71e9ff3deb38a64b85ec3af0709fc9d768547dd2c6e1b81fe17b0d83f92" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:04:46.325220 systemd[1]: Started cri-containerd-fff91cff39f847bdd1a60f50f9bdd68a956ecae691705bc7895b74e9613fc7a2.scope - libcontainer container fff91cff39f847bdd1a60f50f9bdd68a956ecae691705bc7895b74e9613fc7a2. Jun 21 05:04:46.329473 containerd[1565]: time="2025-06-21T05:04:46.328530260Z" level=info msg="connecting to shim fef2d9605accb7b8b74ba135dadbf4b4b4b01efee98597d7758f76c6701e4ab4" address="unix:///run/containerd/s/823188ee1a2884e525acbcf73c84bc5cb8b886257919965204ad61afb773c5b1" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:04:46.417630 systemd[1]: Started cri-containerd-a1066084aab0b80440ba9d635658e7feb5433677ee47fcffa5f882eaa84fcff8.scope - libcontainer container a1066084aab0b80440ba9d635658e7feb5433677ee47fcffa5f882eaa84fcff8. Jun 21 05:04:46.423283 systemd[1]: Started cri-containerd-fef2d9605accb7b8b74ba135dadbf4b4b4b01efee98597d7758f76c6701e4ab4.scope - libcontainer container fef2d9605accb7b8b74ba135dadbf4b4b4b01efee98597d7758f76c6701e4ab4. 
Jun 21 05:04:46.441717 kubelet[2322]: I0621 05:04:46.441674 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 05:04:46.442258 kubelet[2322]: E0621 05:04:46.442205 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Jun 21 05:04:46.462521 containerd[1565]: time="2025-06-21T05:04:46.462460412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d44a6e607abd70bd6ee39de5ca4de8c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fff91cff39f847bdd1a60f50f9bdd68a956ecae691705bc7895b74e9613fc7a2\"" Jun 21 05:04:46.464968 kubelet[2322]: E0621 05:04:46.464943 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:46.469040 containerd[1565]: time="2025-06-21T05:04:46.468992848Z" level=info msg="CreateContainer within sandbox \"fff91cff39f847bdd1a60f50f9bdd68a956ecae691705bc7895b74e9613fc7a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 05:04:46.482931 containerd[1565]: time="2025-06-21T05:04:46.482867628Z" level=info msg="Container 9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:04:46.483175 containerd[1565]: time="2025-06-21T05:04:46.483115854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1066084aab0b80440ba9d635658e7feb5433677ee47fcffa5f882eaa84fcff8\"" Jun 21 05:04:46.484034 kubelet[2322]: E0621 05:04:46.484001 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jun 21 05:04:46.485196 containerd[1565]: time="2025-06-21T05:04:46.485157774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fef2d9605accb7b8b74ba135dadbf4b4b4b01efee98597d7758f76c6701e4ab4\"" Jun 21 05:04:46.486010 kubelet[2322]: E0621 05:04:46.485978 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:46.486742 containerd[1565]: time="2025-06-21T05:04:46.486707171Z" level=info msg="CreateContainer within sandbox \"a1066084aab0b80440ba9d635658e7feb5433677ee47fcffa5f882eaa84fcff8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 05:04:46.491878 containerd[1565]: time="2025-06-21T05:04:46.491441283Z" level=info msg="CreateContainer within sandbox \"fef2d9605accb7b8b74ba135dadbf4b4b4b01efee98597d7758f76c6701e4ab4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 05:04:46.492038 containerd[1565]: time="2025-06-21T05:04:46.492004329Z" level=info msg="CreateContainer within sandbox \"fff91cff39f847bdd1a60f50f9bdd68a956ecae691705bc7895b74e9613fc7a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593\"" Jun 21 05:04:46.492603 containerd[1565]: time="2025-06-21T05:04:46.492575701Z" level=info msg="StartContainer for \"9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593\"" Jun 21 05:04:46.494918 containerd[1565]: time="2025-06-21T05:04:46.494841091Z" level=info msg="connecting to shim 9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593" address="unix:///run/containerd/s/c7bd2293ea38516f12bb23b99f7a92b7a44d732282aacd376b0f554c19f18cfc" protocol=ttrpc version=3 Jun 21 05:04:46.500250 containerd[1565]: 
time="2025-06-21T05:04:46.500162003Z" level=info msg="Container 154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:04:46.504150 containerd[1565]: time="2025-06-21T05:04:46.503703678Z" level=info msg="Container 76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:04:46.509765 containerd[1565]: time="2025-06-21T05:04:46.509721127Z" level=info msg="CreateContainer within sandbox \"a1066084aab0b80440ba9d635658e7feb5433677ee47fcffa5f882eaa84fcff8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b\"" Jun 21 05:04:46.510279 containerd[1565]: time="2025-06-21T05:04:46.510223309Z" level=info msg="StartContainer for \"154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b\"" Jun 21 05:04:46.511585 containerd[1565]: time="2025-06-21T05:04:46.511551220Z" level=info msg="connecting to shim 154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b" address="unix:///run/containerd/s/12a4b71e9ff3deb38a64b85ec3af0709fc9d768547dd2c6e1b81fe17b0d83f92" protocol=ttrpc version=3 Jun 21 05:04:46.512428 containerd[1565]: time="2025-06-21T05:04:46.512391216Z" level=info msg="CreateContainer within sandbox \"fef2d9605accb7b8b74ba135dadbf4b4b4b01efee98597d7758f76c6701e4ab4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717\"" Jun 21 05:04:46.513028 containerd[1565]: time="2025-06-21T05:04:46.512987545Z" level=info msg="StartContainer for \"76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717\"" Jun 21 05:04:46.514039 containerd[1565]: time="2025-06-21T05:04:46.514004793Z" level=info msg="connecting to shim 76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717" 
address="unix:///run/containerd/s/823188ee1a2884e525acbcf73c84bc5cb8b886257919965204ad61afb773c5b1" protocol=ttrpc version=3 Jun 21 05:04:46.526648 systemd[1]: Started cri-containerd-9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593.scope - libcontainer container 9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593. Jun 21 05:04:46.534016 systemd[1]: Started cri-containerd-154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b.scope - libcontainer container 154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b. Jun 21 05:04:46.535601 systemd[1]: Started cri-containerd-76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717.scope - libcontainer container 76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717. Jun 21 05:04:46.569762 kubelet[2322]: W0621 05:04:46.569624 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:46.569762 kubelet[2322]: E0621 05:04:46.569713 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:46.603044 containerd[1565]: time="2025-06-21T05:04:46.603009432Z" level=info msg="StartContainer for \"76f479d3342e468269f69ebbeea78aa42df253fd9325510d693f843590096717\" returns successfully" Jun 21 05:04:46.603741 containerd[1565]: time="2025-06-21T05:04:46.603502548Z" level=info msg="StartContainer for \"154e273fd3e565ce754136aa7c7a93b1ad3c4e67f469e0b024e2c4791968367b\" returns successfully" Jun 21 05:04:46.606534 kubelet[2322]: E0621 05:04:46.606498 2322 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Jun 21 05:04:46.647197 kubelet[2322]: W0621 05:04:46.647135 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Jun 21 05:04:46.647294 kubelet[2322]: E0621 05:04:46.647227 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Jun 21 05:04:46.683190 containerd[1565]: time="2025-06-21T05:04:46.683117591Z" level=info msg="StartContainer for \"9e4199feb8ec78f38ab83d9a132ce14ce49f20d210e42e24c0f8520667971593\" returns successfully" Jun 21 05:04:47.233629 kubelet[2322]: E0621 05:04:47.233587 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:47.234129 kubelet[2322]: E0621 05:04:47.233707 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:47.234129 kubelet[2322]: E0621 05:04:47.234052 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:47.234129 kubelet[2322]: E0621 05:04:47.234126 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:47.236560 kubelet[2322]: E0621 05:04:47.236537 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:47.236687 kubelet[2322]: E0621 05:04:47.236668 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:47.243391 kubelet[2322]: I0621 05:04:47.243366 2322 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 05:04:48.012160 kubelet[2322]: I0621 05:04:48.011909 2322 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 21 05:04:48.012368 kubelet[2322]: E0621 05:04:48.012181 2322 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 21 05:04:48.038195 kubelet[2322]: E0621 05:04:48.038124 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.138536 kubelet[2322]: E0621 05:04:48.138473 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.238509 kubelet[2322]: E0621 05:04:48.238448 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:48.238985 kubelet[2322]: E0621 05:04:48.238552 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 21 05:04:48.238985 kubelet[2322]: E0621 05:04:48.238612 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jun 21 05:04:48.238985 kubelet[2322]: E0621 05:04:48.238639 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.238985 kubelet[2322]: E0621 05:04:48.238660 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:48.339359 kubelet[2322]: E0621 05:04:48.339186 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.439987 kubelet[2322]: E0621 05:04:48.439900 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.540603 kubelet[2322]: E0621 05:04:48.540546 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.641376 kubelet[2322]: E0621 05:04:48.641230 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.741627 kubelet[2322]: E0621 05:04:48.741568 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.842275 kubelet[2322]: E0621 05:04:48.842228 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:48.943073 kubelet[2322]: E0621 05:04:48.943030 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:49.043581 kubelet[2322]: E0621 05:04:49.043534 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:49.144504 kubelet[2322]: E0621 05:04:49.144428 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:49.245071 
kubelet[2322]: E0621 05:04:49.244947 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:49.346109 kubelet[2322]: E0621 05:04:49.346053 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:49.502426 kubelet[2322]: I0621 05:04:49.502301 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:49.512032 kubelet[2322]: I0621 05:04:49.511999 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:49.516864 kubelet[2322]: I0621 05:04:49.516832 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:49.925245 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-7.scope)... Jun 21 05:04:49.925265 systemd[1]: Reloading... Jun 21 05:04:50.030716 zram_generator::config[2642]: No configuration found. 
Jun 21 05:04:50.193472 kubelet[2322]: I0621 05:04:50.193417 2322 apiserver.go:52] "Watching apiserver" Jun 21 05:04:50.195948 kubelet[2322]: E0621 05:04:50.195914 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:50.196238 kubelet[2322]: E0621 05:04:50.196162 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:50.196624 kubelet[2322]: E0621 05:04:50.196585 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:50.202763 kubelet[2322]: I0621 05:04:50.202731 2322 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 05:04:50.512102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 05:04:50.652853 systemd[1]: Reloading finished in 727 ms. Jun 21 05:04:50.683840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:04:50.705158 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 05:04:50.705455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 05:04:50.705529 systemd[1]: kubelet.service: Consumed 1.204s CPU time, 134.8M memory peak. Jun 21 05:04:50.708663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 05:04:50.941388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 05:04:50.951923 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 05:04:51.197346 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:04:51.197346 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 05:04:51.197346 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 05:04:51.197867 kubelet[2681]: I0621 05:04:51.197391 2681 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 05:04:51.204768 kubelet[2681]: I0621 05:04:51.204692 2681 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 05:04:51.204768 kubelet[2681]: I0621 05:04:51.204715 2681 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 05:04:51.205023 kubelet[2681]: I0621 05:04:51.204997 2681 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 05:04:51.206210 kubelet[2681]: I0621 05:04:51.206177 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 21 05:04:51.212197 kubelet[2681]: I0621 05:04:51.212103 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 05:04:51.217499 kubelet[2681]: I0621 05:04:51.217461 2681 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 05:04:51.224145 kubelet[2681]: I0621 05:04:51.224104 2681 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 05:04:51.224365 kubelet[2681]: I0621 05:04:51.224333 2681 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 05:04:51.224561 kubelet[2681]: I0621 05:04:51.224364 2681 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 05:04:51.224651 kubelet[2681]: I0621 05:04:51.224571 2681 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 05:04:51.224651 kubelet[2681]: I0621 05:04:51.224580 2681 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 05:04:51.224651 kubelet[2681]: I0621 05:04:51.224627 2681 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:04:51.224790 kubelet[2681]: I0621 05:04:51.224773 2681 kubelet.go:446] "Attempting to sync node with API server" Jun 21 05:04:51.224817 kubelet[2681]: I0621 05:04:51.224801 2681 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 05:04:51.224843 kubelet[2681]: I0621 05:04:51.224823 2681 kubelet.go:352] "Adding apiserver pod source" Jun 21 05:04:51.224843 kubelet[2681]: I0621 05:04:51.224834 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 05:04:51.225796 kubelet[2681]: I0621 05:04:51.225622 2681 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 05:04:51.226121 kubelet[2681]: I0621 05:04:51.226093 2681 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 05:04:51.228140 kubelet[2681]: I0621 05:04:51.226707 2681 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 05:04:51.228140 kubelet[2681]: I0621 05:04:51.226777 2681 server.go:1287] "Started kubelet" Jun 21 05:04:51.228140 kubelet[2681]: I0621 05:04:51.227169 2681 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 05:04:51.228228 kubelet[2681]: I0621 05:04:51.228183 
2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 05:04:51.228391 kubelet[2681]: I0621 05:04:51.228363 2681 server.go:479] "Adding debug handlers to kubelet server" Jun 21 05:04:51.231886 kubelet[2681]: I0621 05:04:51.231689 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 05:04:51.232896 kubelet[2681]: I0621 05:04:51.232044 2681 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 05:04:51.233819 kubelet[2681]: I0621 05:04:51.233780 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 05:04:51.235392 kubelet[2681]: I0621 05:04:51.235367 2681 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 05:04:51.236126 kubelet[2681]: E0621 05:04:51.235582 2681 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 21 05:04:51.238873 kubelet[2681]: I0621 05:04:51.238131 2681 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 05:04:51.238873 kubelet[2681]: I0621 05:04:51.238324 2681 reconciler.go:26] "Reconciler: start to sync state" Jun 21 05:04:51.240380 kubelet[2681]: I0621 05:04:51.240359 2681 factory.go:221] Registration of the systemd container factory successfully Jun 21 05:04:51.240550 kubelet[2681]: I0621 05:04:51.240532 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 05:04:51.241518 kubelet[2681]: I0621 05:04:51.241463 2681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 21 05:04:51.242726 kubelet[2681]: I0621 05:04:51.242711 2681 factory.go:221] Registration of the containerd container factory successfully Jun 21 05:04:51.243476 kubelet[2681]: I0621 05:04:51.243183 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 05:04:51.243476 kubelet[2681]: I0621 05:04:51.243216 2681 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 05:04:51.243476 kubelet[2681]: I0621 05:04:51.243237 2681 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 05:04:51.243476 kubelet[2681]: I0621 05:04:51.243244 2681 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 05:04:51.243476 kubelet[2681]: E0621 05:04:51.243290 2681 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 05:04:51.244081 kubelet[2681]: E0621 05:04:51.244040 2681 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 05:04:51.289539 kubelet[2681]: I0621 05:04:51.289481 2681 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 05:04:51.289539 kubelet[2681]: I0621 05:04:51.289524 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 05:04:51.289539 kubelet[2681]: I0621 05:04:51.289544 2681 state_mem.go:36] "Initialized new in-memory state store" Jun 21 05:04:51.289751 kubelet[2681]: I0621 05:04:51.289734 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 05:04:51.289775 kubelet[2681]: I0621 05:04:51.289749 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 05:04:51.289796 kubelet[2681]: I0621 05:04:51.289777 2681 policy_none.go:49] "None policy: Start" Jun 21 05:04:51.289796 kubelet[2681]: I0621 05:04:51.289786 2681 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 05:04:51.289796 kubelet[2681]: I0621 05:04:51.289796 2681 state_mem.go:35] "Initializing new in-memory state store" Jun 21 05:04:51.289915 kubelet[2681]: I0621 05:04:51.289901 2681 state_mem.go:75] "Updated machine memory state" Jun 21 05:04:51.294153 kubelet[2681]: I0621 05:04:51.294123 2681 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 05:04:51.294404 kubelet[2681]: I0621 05:04:51.294305 2681 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 05:04:51.294404 kubelet[2681]: I0621 05:04:51.294322 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 05:04:51.295312 kubelet[2681]: I0621 05:04:51.294651 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 05:04:51.299186 kubelet[2681]: E0621 05:04:51.298974 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 21 05:04:51.344793 kubelet[2681]: I0621 05:04:51.344751 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.345310 kubelet[2681]: I0621 05:04:51.345274 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:51.346361 kubelet[2681]: I0621 05:04:51.346232 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:51.349788 kubelet[2681]: E0621 05:04:51.349731 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.352661 kubelet[2681]: E0621 05:04:51.352553 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:51.353136 kubelet[2681]: E0621 05:04:51.353101 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:51.399973 kubelet[2681]: I0621 05:04:51.399931 2681 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 21 05:04:51.406474 kubelet[2681]: I0621 05:04:51.406441 2681 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jun 21 05:04:51.406581 kubelet[2681]: I0621 05:04:51.406548 2681 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 21 05:04:51.439390 kubelet[2681]: I0621 05:04:51.439348 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.439390 kubelet[2681]: I0621 05:04:51.439387 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:51.439390 kubelet[2681]: I0621 05:04:51.439406 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d44a6e607abd70bd6ee39de5ca4de8c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44a6e607abd70bd6ee39de5ca4de8c8\") " pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:51.439390 kubelet[2681]: I0621 05:04:51.439421 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d44a6e607abd70bd6ee39de5ca4de8c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d44a6e607abd70bd6ee39de5ca4de8c8\") " pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:51.439731 kubelet[2681]: I0621 05:04:51.439437 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.439731 kubelet[2681]: I0621 05:04:51.439453 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.439731 kubelet[2681]: I0621 05:04:51.439466 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d44a6e607abd70bd6ee39de5ca4de8c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d44a6e607abd70bd6ee39de5ca4de8c8\") " pod="kube-system/kube-apiserver-localhost" Jun 21 05:04:51.441135 kubelet[2681]: I0621 05:04:51.439480 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.441135 kubelet[2681]: I0621 05:04:51.441125 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 21 05:04:51.650822 kubelet[2681]: E0621 05:04:51.650689 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:51.653742 kubelet[2681]: E0621 05:04:51.653718 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:51.654197 kubelet[2681]: E0621 05:04:51.653801 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:52.226627 kubelet[2681]: I0621 
05:04:52.226563 2681 apiserver.go:52] "Watching apiserver" Jun 21 05:04:52.239177 kubelet[2681]: I0621 05:04:52.239144 2681 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 05:04:52.262661 kubelet[2681]: E0621 05:04:52.262468 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:52.262661 kubelet[2681]: E0621 05:04:52.262509 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:52.262661 kubelet[2681]: I0621 05:04:52.262614 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:52.271709 kubelet[2681]: E0621 05:04:52.271636 2681 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 21 05:04:52.272606 kubelet[2681]: E0621 05:04:52.271892 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:52.309447 kubelet[2681]: I0621 05:04:52.309370 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.309344404 podStartE2EDuration="3.309344404s" podCreationTimestamp="2025-06-21 05:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:04:52.308529165 +0000 UTC m=+1.352090770" watchObservedRunningTime="2025-06-21 05:04:52.309344404 +0000 UTC m=+1.352906009" Jun 21 05:04:52.323029 kubelet[2681]: I0621 05:04:52.322955 2681 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.322936133 podStartE2EDuration="3.322936133s" podCreationTimestamp="2025-06-21 05:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:04:52.315108057 +0000 UTC m=+1.358669672" watchObservedRunningTime="2025-06-21 05:04:52.322936133 +0000 UTC m=+1.366497738" Jun 21 05:04:52.330862 kubelet[2681]: I0621 05:04:52.330806 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.330789046 podStartE2EDuration="3.330789046s" podCreationTimestamp="2025-06-21 05:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:04:52.322871592 +0000 UTC m=+1.366433197" watchObservedRunningTime="2025-06-21 05:04:52.330789046 +0000 UTC m=+1.374350641" Jun 21 05:04:53.263608 kubelet[2681]: E0621 05:04:53.263560 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:53.264001 kubelet[2681]: E0621 05:04:53.263823 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:53.629377 kubelet[2681]: E0621 05:04:53.629228 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:54.690798 kubelet[2681]: I0621 05:04:54.690763 2681 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 05:04:54.691287 kubelet[2681]: I0621 05:04:54.691231 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" 
newPodCIDR="192.168.0.0/24" Jun 21 05:04:54.691386 containerd[1565]: time="2025-06-21T05:04:54.691073598Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 05:04:55.058633 kubelet[2681]: E0621 05:04:55.058525 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:55.734368 systemd[1]: Created slice kubepods-besteffort-pod9b087c7d_11b6_4ce0_aa63_e4cbfaf671c0.slice - libcontainer container kubepods-besteffort-pod9b087c7d_11b6_4ce0_aa63_e4cbfaf671c0.slice. Jun 21 05:04:55.768446 kubelet[2681]: I0621 05:04:55.768389 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0-xtables-lock\") pod \"kube-proxy-pdk5t\" (UID: \"9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0\") " pod="kube-system/kube-proxy-pdk5t" Jun 21 05:04:55.768446 kubelet[2681]: I0621 05:04:55.768437 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0-kube-proxy\") pod \"kube-proxy-pdk5t\" (UID: \"9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0\") " pod="kube-system/kube-proxy-pdk5t" Jun 21 05:04:55.768919 kubelet[2681]: I0621 05:04:55.768460 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0-lib-modules\") pod \"kube-proxy-pdk5t\" (UID: \"9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0\") " pod="kube-system/kube-proxy-pdk5t" Jun 21 05:04:55.768919 kubelet[2681]: I0621 05:04:55.768481 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkh4\" (UniqueName: 
\"kubernetes.io/projected/9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0-kube-api-access-dgkh4\") pod \"kube-proxy-pdk5t\" (UID: \"9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0\") " pod="kube-system/kube-proxy-pdk5t" Jun 21 05:04:55.849650 systemd[1]: Created slice kubepods-besteffort-pod164a31a5_e261_44d1_9a67_2488acc045a9.slice - libcontainer container kubepods-besteffort-pod164a31a5_e261_44d1_9a67_2488acc045a9.slice. Jun 21 05:04:55.869469 kubelet[2681]: I0621 05:04:55.869373 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/164a31a5-e261-44d1-9a67-2488acc045a9-var-lib-calico\") pod \"tigera-operator-68f7c7984d-wlh2r\" (UID: \"164a31a5-e261-44d1-9a67-2488acc045a9\") " pod="tigera-operator/tigera-operator-68f7c7984d-wlh2r" Jun 21 05:04:55.869469 kubelet[2681]: I0621 05:04:55.869459 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmk5\" (UniqueName: \"kubernetes.io/projected/164a31a5-e261-44d1-9a67-2488acc045a9-kube-api-access-kvmk5\") pod \"tigera-operator-68f7c7984d-wlh2r\" (UID: \"164a31a5-e261-44d1-9a67-2488acc045a9\") " pod="tigera-operator/tigera-operator-68f7c7984d-wlh2r" Jun 21 05:04:56.043122 kubelet[2681]: E0621 05:04:56.043027 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:56.043876 containerd[1565]: time="2025-06-21T05:04:56.043818778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdk5t,Uid:9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0,Namespace:kube-system,Attempt:0,}" Jun 21 05:04:56.069276 containerd[1565]: time="2025-06-21T05:04:56.069229097Z" level=info msg="connecting to shim abf76f0489e8e70f4d663b5ca77ed658c44e2dddfd73724a3c6cf51f477d7443" 
address="unix:///run/containerd/s/55c333a92ff203120c0d119d07033b097db27c7adbef6a51f0d49f150923b560" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:04:56.096622 systemd[1]: Started cri-containerd-abf76f0489e8e70f4d663b5ca77ed658c44e2dddfd73724a3c6cf51f477d7443.scope - libcontainer container abf76f0489e8e70f4d663b5ca77ed658c44e2dddfd73724a3c6cf51f477d7443. Jun 21 05:04:56.122841 containerd[1565]: time="2025-06-21T05:04:56.122777003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdk5t,Uid:9b087c7d-11b6-4ce0-aa63-e4cbfaf671c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"abf76f0489e8e70f4d663b5ca77ed658c44e2dddfd73724a3c6cf51f477d7443\"" Jun 21 05:04:56.123384 kubelet[2681]: E0621 05:04:56.123360 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:56.125796 containerd[1565]: time="2025-06-21T05:04:56.125721229Z" level=info msg="CreateContainer within sandbox \"abf76f0489e8e70f4d663b5ca77ed658c44e2dddfd73724a3c6cf51f477d7443\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 05:04:56.138639 containerd[1565]: time="2025-06-21T05:04:56.138574749Z" level=info msg="Container c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:04:56.147135 containerd[1565]: time="2025-06-21T05:04:56.147093701Z" level=info msg="CreateContainer within sandbox \"abf76f0489e8e70f4d663b5ca77ed658c44e2dddfd73724a3c6cf51f477d7443\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71\"" Jun 21 05:04:56.147753 containerd[1565]: time="2025-06-21T05:04:56.147690503Z" level=info msg="StartContainer for \"c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71\"" Jun 21 05:04:56.149325 containerd[1565]: time="2025-06-21T05:04:56.149295923Z" 
level=info msg="connecting to shim c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71" address="unix:///run/containerd/s/55c333a92ff203120c0d119d07033b097db27c7adbef6a51f0d49f150923b560" protocol=ttrpc version=3 Jun 21 05:04:56.153282 containerd[1565]: time="2025-06-21T05:04:56.153250520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-wlh2r,Uid:164a31a5-e261-44d1-9a67-2488acc045a9,Namespace:tigera-operator,Attempt:0,}" Jun 21 05:04:56.171690 systemd[1]: Started cri-containerd-c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71.scope - libcontainer container c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71. Jun 21 05:04:56.174508 containerd[1565]: time="2025-06-21T05:04:56.174458103Z" level=info msg="connecting to shim 5c243220d22400059a2bd132cfcc29c06193db4d503600900fce959d99b2a86a" address="unix:///run/containerd/s/f6d6a5e038f7632bd459e5167c7735f2d4ab8e963ce8e51f0c05c4e238b1604b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:04:56.210628 systemd[1]: Started cri-containerd-5c243220d22400059a2bd132cfcc29c06193db4d503600900fce959d99b2a86a.scope - libcontainer container 5c243220d22400059a2bd132cfcc29c06193db4d503600900fce959d99b2a86a. 
Jun 21 05:04:56.235905 containerd[1565]: time="2025-06-21T05:04:56.235840237Z" level=info msg="StartContainer for \"c27cb2a0b15386f4c70338e23ff54eb0daf0a70b8d77281f518066c51dd32b71\" returns successfully" Jun 21 05:04:56.259604 containerd[1565]: time="2025-06-21T05:04:56.259554772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-wlh2r,Uid:164a31a5-e261-44d1-9a67-2488acc045a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5c243220d22400059a2bd132cfcc29c06193db4d503600900fce959d99b2a86a\"" Jun 21 05:04:56.262434 containerd[1565]: time="2025-06-21T05:04:56.262404044Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 21 05:04:56.270932 kubelet[2681]: E0621 05:04:56.270901 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:04:57.467774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812183421.mount: Deactivated successfully. 
Jun 21 05:04:58.031066 containerd[1565]: time="2025-06-21T05:04:58.031010372Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:58.031857 containerd[1565]: time="2025-06-21T05:04:58.031825221Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 21 05:04:58.033101 containerd[1565]: time="2025-06-21T05:04:58.033050268Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:58.035136 containerd[1565]: time="2025-06-21T05:04:58.035085936Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:04:58.036038 containerd[1565]: time="2025-06-21T05:04:58.035991158Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 1.773522989s" Jun 21 05:04:58.036082 containerd[1565]: time="2025-06-21T05:04:58.036035894Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 21 05:04:58.038267 containerd[1565]: time="2025-06-21T05:04:58.038233913Z" level=info msg="CreateContainer within sandbox \"5c243220d22400059a2bd132cfcc29c06193db4d503600900fce959d99b2a86a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 21 05:04:58.044608 containerd[1565]: time="2025-06-21T05:04:58.044564266Z" level=info msg="Container 
81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:04:58.051058 containerd[1565]: time="2025-06-21T05:04:58.050966817Z" level=info msg="CreateContainer within sandbox \"5c243220d22400059a2bd132cfcc29c06193db4d503600900fce959d99b2a86a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac\"" Jun 21 05:04:58.054263 containerd[1565]: time="2025-06-21T05:04:58.053038864Z" level=info msg="StartContainer for \"81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac\"" Jun 21 05:04:58.054263 containerd[1565]: time="2025-06-21T05:04:58.053994383Z" level=info msg="connecting to shim 81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac" address="unix:///run/containerd/s/f6d6a5e038f7632bd459e5167c7735f2d4ab8e963ce8e51f0c05c4e238b1604b" protocol=ttrpc version=3 Jun 21 05:04:58.112623 systemd[1]: Started cri-containerd-81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac.scope - libcontainer container 81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac. 
Jun 21 05:04:58.145210 containerd[1565]: time="2025-06-21T05:04:58.145165334Z" level=info msg="StartContainer for \"81410e078e5b56083e54cd943efbe8a501f8ab3ad53a3bd997df5cc0fa4995ac\" returns successfully" Jun 21 05:04:58.284140 kubelet[2681]: I0621 05:04:58.283847 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdk5t" podStartSLOduration=3.2838187420000002 podStartE2EDuration="3.283818742s" podCreationTimestamp="2025-06-21 05:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:04:56.280342976 +0000 UTC m=+5.323904591" watchObservedRunningTime="2025-06-21 05:04:58.283818742 +0000 UTC m=+7.327380348" Jun 21 05:05:00.194275 kubelet[2681]: E0621 05:05:00.194228 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:00.220510 kubelet[2681]: I0621 05:05:00.219648 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-wlh2r" podStartSLOduration=3.444326717 podStartE2EDuration="5.219625159s" podCreationTimestamp="2025-06-21 05:04:55 +0000 UTC" firstStartedPulling="2025-06-21 05:04:56.261557089 +0000 UTC m=+5.305118694" lastFinishedPulling="2025-06-21 05:04:58.036855531 +0000 UTC m=+7.080417136" observedRunningTime="2025-06-21 05:04:58.284158978 +0000 UTC m=+7.327720583" watchObservedRunningTime="2025-06-21 05:05:00.219625159 +0000 UTC m=+9.263186764" Jun 21 05:05:00.279348 kubelet[2681]: E0621 05:05:00.279310 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:01.280645 kubelet[2681]: E0621 05:05:01.280600 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:03.360262 sudo[1768]: pam_unix(sudo:session): session closed for user root Jun 21 05:05:03.362287 sshd[1767]: Connection closed by 10.0.0.1 port 54764 Jun 21 05:05:03.364540 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:03.368865 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:54764.service: Deactivated successfully. Jun 21 05:05:03.373050 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 05:05:03.373312 systemd[1]: session-7.scope: Consumed 5.355s CPU time, 220.7M memory peak. Jun 21 05:05:03.376618 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Jun 21 05:05:03.378279 systemd-logind[1550]: Removed session 7. Jun 21 05:05:03.633723 kubelet[2681]: E0621 05:05:03.633589 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:05.068889 kubelet[2681]: E0621 05:05:05.067423 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:05.288004 kubelet[2681]: E0621 05:05:05.287954 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:05.694638 kubelet[2681]: W0621 05:05:05.694514 2681 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jun 21 05:05:05.694638 kubelet[2681]: E0621 05:05:05.694598 2681 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jun 21 05:05:05.707937 systemd[1]: Created slice kubepods-besteffort-pode17996fa_e340_40b2_8d8e_8f426f11f155.slice - libcontainer container kubepods-besteffort-pode17996fa_e340_40b2_8d8e_8f426f11f155.slice. Jun 21 05:05:05.737916 kubelet[2681]: I0621 05:05:05.737851 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgnk\" (UniqueName: \"kubernetes.io/projected/e17996fa-e340-40b2-8d8e-8f426f11f155-kube-api-access-pvgnk\") pod \"calico-typha-54bcb67b96-r9vzr\" (UID: \"e17996fa-e340-40b2-8d8e-8f426f11f155\") " pod="calico-system/calico-typha-54bcb67b96-r9vzr" Jun 21 05:05:05.737916 kubelet[2681]: I0621 05:05:05.737901 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e17996fa-e340-40b2-8d8e-8f426f11f155-tigera-ca-bundle\") pod \"calico-typha-54bcb67b96-r9vzr\" (UID: \"e17996fa-e340-40b2-8d8e-8f426f11f155\") " pod="calico-system/calico-typha-54bcb67b96-r9vzr" Jun 21 05:05:05.737916 kubelet[2681]: I0621 05:05:05.737916 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e17996fa-e340-40b2-8d8e-8f426f11f155-typha-certs\") pod \"calico-typha-54bcb67b96-r9vzr\" (UID: \"e17996fa-e340-40b2-8d8e-8f426f11f155\") " pod="calico-system/calico-typha-54bcb67b96-r9vzr" Jun 21 05:05:06.096939 systemd[1]: Created slice kubepods-besteffort-podb434b871_006f_4585_bdab_335b26ecc539.slice - libcontainer container 
kubepods-besteffort-podb434b871_006f_4585_bdab_335b26ecc539.slice. Jun 21 05:05:06.140740 kubelet[2681]: I0621 05:05:06.140673 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-cni-log-dir\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.140740 kubelet[2681]: I0621 05:05:06.140720 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-cni-net-dir\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.140740 kubelet[2681]: I0621 05:05:06.140739 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-xtables-lock\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141254 kubelet[2681]: I0621 05:05:06.140759 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znvjg\" (UniqueName: \"kubernetes.io/projected/b434b871-006f-4585-bdab-335b26ecc539-kube-api-access-znvjg\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141254 kubelet[2681]: I0621 05:05:06.140848 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-cni-bin-dir\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 
21 05:05:06.141254 kubelet[2681]: I0621 05:05:06.140893 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-policysync\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141254 kubelet[2681]: I0621 05:05:06.140923 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-var-lib-calico\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141254 kubelet[2681]: I0621 05:05:06.140959 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b434b871-006f-4585-bdab-335b26ecc539-node-certs\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141436 kubelet[2681]: I0621 05:05:06.140980 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b434b871-006f-4585-bdab-335b26ecc539-tigera-ca-bundle\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141436 kubelet[2681]: I0621 05:05:06.140997 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-var-run-calico\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141436 kubelet[2681]: I0621 05:05:06.141020 2681 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-flexvol-driver-host\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.141436 kubelet[2681]: I0621 05:05:06.141081 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b434b871-006f-4585-bdab-335b26ecc539-lib-modules\") pod \"calico-node-qr2vh\" (UID: \"b434b871-006f-4585-bdab-335b26ecc539\") " pod="calico-system/calico-node-qr2vh" Jun 21 05:05:06.249095 kubelet[2681]: E0621 05:05:06.248933 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.249095 kubelet[2681]: W0621 05:05:06.248960 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.249095 kubelet[2681]: E0621 05:05:06.249012 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.250299 kubelet[2681]: E0621 05:05:06.250129 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.250299 kubelet[2681]: W0621 05:05:06.250147 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.250299 kubelet[2681]: E0621 05:05:06.250157 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.334781 kubelet[2681]: E0621 05:05:06.334721 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnpdd" podUID="3eee9492-ff67-4a0d-a49e-690cbd0112e0" Jun 21 05:05:06.425128 kubelet[2681]: E0621 05:05:06.424530 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.425128 kubelet[2681]: W0621 05:05:06.424565 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.425128 kubelet[2681]: E0621 05:05:06.424595 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.425128 kubelet[2681]: E0621 05:05:06.424839 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.425128 kubelet[2681]: W0621 05:05:06.424862 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.425128 kubelet[2681]: E0621 05:05:06.424890 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.425399 kubelet[2681]: E0621 05:05:06.425339 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.425399 kubelet[2681]: W0621 05:05:06.425351 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.425399 kubelet[2681]: E0621 05:05:06.425364 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.426200 kubelet[2681]: E0621 05:05:06.426171 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.426200 kubelet[2681]: W0621 05:05:06.426194 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.426295 kubelet[2681]: E0621 05:05:06.426219 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.426526 kubelet[2681]: E0621 05:05:06.426504 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.426526 kubelet[2681]: W0621 05:05:06.426517 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.426526 kubelet[2681]: E0621 05:05:06.426529 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.426746 kubelet[2681]: E0621 05:05:06.426727 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.426746 kubelet[2681]: W0621 05:05:06.426739 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.426746 kubelet[2681]: E0621 05:05:06.426748 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.426926 kubelet[2681]: E0621 05:05:06.426910 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.426926 kubelet[2681]: W0621 05:05:06.426920 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.427018 kubelet[2681]: E0621 05:05:06.426928 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.427148 kubelet[2681]: E0621 05:05:06.427106 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.427148 kubelet[2681]: W0621 05:05:06.427118 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.427148 kubelet[2681]: E0621 05:05:06.427127 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.427312 kubelet[2681]: E0621 05:05:06.427294 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.427350 kubelet[2681]: W0621 05:05:06.427315 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.427350 kubelet[2681]: E0621 05:05:06.427324 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.427547 kubelet[2681]: E0621 05:05:06.427525 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.427547 kubelet[2681]: W0621 05:05:06.427540 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.427652 kubelet[2681]: E0621 05:05:06.427552 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.427746 kubelet[2681]: E0621 05:05:06.427730 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.427746 kubelet[2681]: W0621 05:05:06.427742 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.427795 kubelet[2681]: E0621 05:05:06.427750 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.428049 kubelet[2681]: E0621 05:05:06.428029 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.428049 kubelet[2681]: W0621 05:05:06.428042 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.428049 kubelet[2681]: E0621 05:05:06.428051 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.428341 kubelet[2681]: E0621 05:05:06.428323 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.428341 kubelet[2681]: W0621 05:05:06.428339 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.428417 kubelet[2681]: E0621 05:05:06.428356 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.428596 kubelet[2681]: E0621 05:05:06.428570 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.428596 kubelet[2681]: W0621 05:05:06.428583 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.428596 kubelet[2681]: E0621 05:05:06.428596 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.428884 kubelet[2681]: E0621 05:05:06.428867 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.428884 kubelet[2681]: W0621 05:05:06.428882 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.428949 kubelet[2681]: E0621 05:05:06.428893 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.429140 kubelet[2681]: E0621 05:05:06.429114 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.429140 kubelet[2681]: W0621 05:05:06.429130 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.429212 kubelet[2681]: E0621 05:05:06.429142 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.429398 kubelet[2681]: E0621 05:05:06.429375 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.429398 kubelet[2681]: W0621 05:05:06.429393 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.429468 kubelet[2681]: E0621 05:05:06.429404 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.429621 kubelet[2681]: E0621 05:05:06.429601 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.429621 kubelet[2681]: W0621 05:05:06.429615 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.429717 kubelet[2681]: E0621 05:05:06.429625 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.429851 kubelet[2681]: E0621 05:05:06.429835 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.429851 kubelet[2681]: W0621 05:05:06.429846 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.429935 kubelet[2681]: E0621 05:05:06.429857 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.430244 kubelet[2681]: E0621 05:05:06.430131 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.430244 kubelet[2681]: W0621 05:05:06.430142 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.430244 kubelet[2681]: E0621 05:05:06.430153 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.443689 kubelet[2681]: E0621 05:05:06.443667 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.443689 kubelet[2681]: W0621 05:05:06.443683 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.443782 kubelet[2681]: E0621 05:05:06.443694 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.443782 kubelet[2681]: I0621 05:05:06.443723 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3eee9492-ff67-4a0d-a49e-690cbd0112e0-varrun\") pod \"csi-node-driver-gnpdd\" (UID: \"3eee9492-ff67-4a0d-a49e-690cbd0112e0\") " pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:06.443957 kubelet[2681]: E0621 05:05:06.443933 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.444008 kubelet[2681]: W0621 05:05:06.443955 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.444008 kubelet[2681]: E0621 05:05:06.443972 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.444081 kubelet[2681]: I0621 05:05:06.444005 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8jhq\" (UniqueName: \"kubernetes.io/projected/3eee9492-ff67-4a0d-a49e-690cbd0112e0-kube-api-access-p8jhq\") pod \"csi-node-driver-gnpdd\" (UID: \"3eee9492-ff67-4a0d-a49e-690cbd0112e0\") " pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:06.444295 kubelet[2681]: E0621 05:05:06.444257 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.444295 kubelet[2681]: W0621 05:05:06.444281 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.444369 kubelet[2681]: E0621 05:05:06.444304 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.444369 kubelet[2681]: I0621 05:05:06.444340 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3eee9492-ff67-4a0d-a49e-690cbd0112e0-kubelet-dir\") pod \"csi-node-driver-gnpdd\" (UID: \"3eee9492-ff67-4a0d-a49e-690cbd0112e0\") " pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:06.444841 kubelet[2681]: E0621 05:05:06.444590 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.444841 kubelet[2681]: W0621 05:05:06.444612 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.444841 kubelet[2681]: E0621 05:05:06.444637 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.444841 kubelet[2681]: I0621 05:05:06.444659 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3eee9492-ff67-4a0d-a49e-690cbd0112e0-registration-dir\") pod \"csi-node-driver-gnpdd\" (UID: \"3eee9492-ff67-4a0d-a49e-690cbd0112e0\") " pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:06.444951 kubelet[2681]: E0621 05:05:06.444899 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.444951 kubelet[2681]: W0621 05:05:06.444912 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.444951 kubelet[2681]: E0621 05:05:06.444931 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.445019 kubelet[2681]: I0621 05:05:06.444949 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3eee9492-ff67-4a0d-a49e-690cbd0112e0-socket-dir\") pod \"csi-node-driver-gnpdd\" (UID: \"3eee9492-ff67-4a0d-a49e-690cbd0112e0\") " pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:06.445217 kubelet[2681]: E0621 05:05:06.445198 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.445217 kubelet[2681]: W0621 05:05:06.445214 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.445281 kubelet[2681]: E0621 05:05:06.445232 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.445453 kubelet[2681]: E0621 05:05:06.445429 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.445453 kubelet[2681]: W0621 05:05:06.445443 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.445530 kubelet[2681]: E0621 05:05:06.445475 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.445661 kubelet[2681]: E0621 05:05:06.445637 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.445661 kubelet[2681]: W0621 05:05:06.445652 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.445709 kubelet[2681]: E0621 05:05:06.445679 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.445832 kubelet[2681]: E0621 05:05:06.445816 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.445832 kubelet[2681]: W0621 05:05:06.445829 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.445876 kubelet[2681]: E0621 05:05:06.445855 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.446026 kubelet[2681]: E0621 05:05:06.446009 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.446026 kubelet[2681]: W0621 05:05:06.446023 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.446093 kubelet[2681]: E0621 05:05:06.446051 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.446227 kubelet[2681]: E0621 05:05:06.446210 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.446227 kubelet[2681]: W0621 05:05:06.446223 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.446270 kubelet[2681]: E0621 05:05:06.446250 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.446403 kubelet[2681]: E0621 05:05:06.446388 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.446403 kubelet[2681]: W0621 05:05:06.446400 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.446449 kubelet[2681]: E0621 05:05:06.446411 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.446649 kubelet[2681]: E0621 05:05:06.446633 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.446649 kubelet[2681]: W0621 05:05:06.446645 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.446705 kubelet[2681]: E0621 05:05:06.446656 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.446844 kubelet[2681]: E0621 05:05:06.446829 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.446844 kubelet[2681]: W0621 05:05:06.446841 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.446903 kubelet[2681]: E0621 05:05:06.446852 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.447047 kubelet[2681]: E0621 05:05:06.447032 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.447047 kubelet[2681]: W0621 05:05:06.447045 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.447104 kubelet[2681]: E0621 05:05:06.447055 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.546235 kubelet[2681]: E0621 05:05:06.546188 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.546235 kubelet[2681]: W0621 05:05:06.546221 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.546235 kubelet[2681]: E0621 05:05:06.546249 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.546564 kubelet[2681]: E0621 05:05:06.546545 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.546564 kubelet[2681]: W0621 05:05:06.546561 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.546638 kubelet[2681]: E0621 05:05:06.546580 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.546817 kubelet[2681]: E0621 05:05:06.546796 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.546868 kubelet[2681]: W0621 05:05:06.546828 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.546868 kubelet[2681]: E0621 05:05:06.546845 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.547051 kubelet[2681]: E0621 05:05:06.547033 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.547051 kubelet[2681]: W0621 05:05:06.547047 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.547102 kubelet[2681]: E0621 05:05:06.547065 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.547372 kubelet[2681]: E0621 05:05:06.547343 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.547409 kubelet[2681]: W0621 05:05:06.547370 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.547409 kubelet[2681]: E0621 05:05:06.547403 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.547608 kubelet[2681]: E0621 05:05:06.547594 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.547608 kubelet[2681]: W0621 05:05:06.547604 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.547656 kubelet[2681]: E0621 05:05:06.547617 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.547807 kubelet[2681]: E0621 05:05:06.547792 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.547807 kubelet[2681]: W0621 05:05:06.547801 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.547850 kubelet[2681]: E0621 05:05:06.547814 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.548085 kubelet[2681]: E0621 05:05:06.548062 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.548085 kubelet[2681]: W0621 05:05:06.548080 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.548137 kubelet[2681]: E0621 05:05:06.548116 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.548280 kubelet[2681]: E0621 05:05:06.548266 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.548280 kubelet[2681]: W0621 05:05:06.548275 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.548322 kubelet[2681]: E0621 05:05:06.548308 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.548462 kubelet[2681]: E0621 05:05:06.548447 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.548462 kubelet[2681]: W0621 05:05:06.548457 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.548546 kubelet[2681]: E0621 05:05:06.548501 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.548665 kubelet[2681]: E0621 05:05:06.548651 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.548665 kubelet[2681]: W0621 05:05:06.548661 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.548707 kubelet[2681]: E0621 05:05:06.548689 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.548842 kubelet[2681]: E0621 05:05:06.548828 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.548842 kubelet[2681]: W0621 05:05:06.548837 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.548882 kubelet[2681]: E0621 05:05:06.548851 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.549085 kubelet[2681]: E0621 05:05:06.549063 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.549085 kubelet[2681]: W0621 05:05:06.549081 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.549133 kubelet[2681]: E0621 05:05:06.549096 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.549282 kubelet[2681]: E0621 05:05:06.549268 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.549282 kubelet[2681]: W0621 05:05:06.549278 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.549332 kubelet[2681]: E0621 05:05:06.549291 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.549464 kubelet[2681]: E0621 05:05:06.549451 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.549464 kubelet[2681]: W0621 05:05:06.549460 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.549526 kubelet[2681]: E0621 05:05:06.549510 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.549653 kubelet[2681]: E0621 05:05:06.549639 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.549653 kubelet[2681]: W0621 05:05:06.549648 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.549730 kubelet[2681]: E0621 05:05:06.549695 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.549856 kubelet[2681]: E0621 05:05:06.549840 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.549856 kubelet[2681]: W0621 05:05:06.549854 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.549903 kubelet[2681]: E0621 05:05:06.549889 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.550075 kubelet[2681]: E0621 05:05:06.550051 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.550075 kubelet[2681]: W0621 05:05:06.550064 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.550134 kubelet[2681]: E0621 05:05:06.550113 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.550272 kubelet[2681]: E0621 05:05:06.550257 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.550272 kubelet[2681]: W0621 05:05:06.550270 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.550313 kubelet[2681]: E0621 05:05:06.550285 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.550475 kubelet[2681]: E0621 05:05:06.550460 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.550475 kubelet[2681]: W0621 05:05:06.550472 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.550550 kubelet[2681]: E0621 05:05:06.550505 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.550760 kubelet[2681]: E0621 05:05:06.550741 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.550760 kubelet[2681]: W0621 05:05:06.550753 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.550807 kubelet[2681]: E0621 05:05:06.550767 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.550978 kubelet[2681]: E0621 05:05:06.550960 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.550978 kubelet[2681]: W0621 05:05:06.550971 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.551027 kubelet[2681]: E0621 05:05:06.550984 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.551197 kubelet[2681]: E0621 05:05:06.551179 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.551197 kubelet[2681]: W0621 05:05:06.551191 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.551249 kubelet[2681]: E0621 05:05:06.551205 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.551421 kubelet[2681]: E0621 05:05:06.551402 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.551421 kubelet[2681]: W0621 05:05:06.551414 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.551461 kubelet[2681]: E0621 05:05:06.551429 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:06.551734 kubelet[2681]: E0621 05:05:06.551715 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.551734 kubelet[2681]: W0621 05:05:06.551732 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.551808 kubelet[2681]: E0621 05:05:06.551743 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.558552 kubelet[2681]: E0621 05:05:06.558527 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.558552 kubelet[2681]: W0621 05:05:06.558551 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.558625 kubelet[2681]: E0621 05:05:06.558561 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.839270 kubelet[2681]: E0621 05:05:06.839227 2681 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jun 21 05:05:06.839434 kubelet[2681]: E0621 05:05:06.839316 2681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e17996fa-e340-40b2-8d8e-8f426f11f155-tigera-ca-bundle podName:e17996fa-e340-40b2-8d8e-8f426f11f155 nodeName:}" failed. No retries permitted until 2025-06-21 05:05:07.33929322 +0000 UTC m=+16.382854825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/e17996fa-e340-40b2-8d8e-8f426f11f155-tigera-ca-bundle") pod "calico-typha-54bcb67b96-r9vzr" (UID: "e17996fa-e340-40b2-8d8e-8f426f11f155") : failed to sync configmap cache: timed out waiting for the condition Jun 21 05:05:06.850469 kubelet[2681]: E0621 05:05:06.850441 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.850469 kubelet[2681]: W0621 05:05:06.850462 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.850561 kubelet[2681]: E0621 05:05:06.850497 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:06.951903 kubelet[2681]: E0621 05:05:06.951869 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:06.951903 kubelet[2681]: W0621 05:05:06.951888 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:06.952005 kubelet[2681]: E0621 05:05:06.951910 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:07.052448 kubelet[2681]: E0621 05:05:07.052398 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.052448 kubelet[2681]: W0621 05:05:07.052421 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.052448 kubelet[2681]: E0621 05:05:07.052443 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:07.054602 kubelet[2681]: E0621 05:05:07.054566 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.054602 kubelet[2681]: W0621 05:05:07.054594 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.054696 kubelet[2681]: E0621 05:05:07.054611 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:07.153333 kubelet[2681]: E0621 05:05:07.153220 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.153333 kubelet[2681]: W0621 05:05:07.153241 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.153333 kubelet[2681]: E0621 05:05:07.153260 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:07.254692 kubelet[2681]: E0621 05:05:07.254655 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.254692 kubelet[2681]: W0621 05:05:07.254674 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.254692 kubelet[2681]: E0621 05:05:07.254693 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:07.300635 containerd[1565]: time="2025-06-21T05:05:07.300556211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qr2vh,Uid:b434b871-006f-4585-bdab-335b26ecc539,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:07.356204 kubelet[2681]: E0621 05:05:07.356171 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.356204 kubelet[2681]: W0621 05:05:07.356197 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.356302 kubelet[2681]: E0621 05:05:07.356219 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:07.356480 kubelet[2681]: E0621 05:05:07.356461 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.356480 kubelet[2681]: W0621 05:05:07.356472 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.356556 kubelet[2681]: E0621 05:05:07.356482 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:07.356780 kubelet[2681]: E0621 05:05:07.356741 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.356780 kubelet[2681]: W0621 05:05:07.356769 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.356856 kubelet[2681]: E0621 05:05:07.356789 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:07.356992 kubelet[2681]: E0621 05:05:07.356980 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.356992 kubelet[2681]: W0621 05:05:07.356988 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.357062 kubelet[2681]: E0621 05:05:07.356996 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 21 05:05:07.357228 kubelet[2681]: E0621 05:05:07.357207 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.357228 kubelet[2681]: W0621 05:05:07.357217 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.357228 kubelet[2681]: E0621 05:05:07.357225 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:07.358168 kubelet[2681]: E0621 05:05:07.358144 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 21 05:05:07.358168 kubelet[2681]: W0621 05:05:07.358155 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 21 05:05:07.358168 kubelet[2681]: E0621 05:05:07.358165 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 21 05:05:07.422988 containerd[1565]: time="2025-06-21T05:05:07.422464117Z" level=info msg="connecting to shim b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8" address="unix:///run/containerd/s/4ddfcd1baf73d156b044fc681cd328351efccf44098273de847bc140fbb885cb" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:07.453644 systemd[1]: Started cri-containerd-b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8.scope - libcontainer container b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8. 
Jun 21 05:05:07.513756 kubelet[2681]: E0621 05:05:07.513712 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:07.514230 containerd[1565]: time="2025-06-21T05:05:07.514174463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54bcb67b96-r9vzr,Uid:e17996fa-e340-40b2-8d8e-8f426f11f155,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:07.521776 containerd[1565]: time="2025-06-21T05:05:07.521731705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qr2vh,Uid:b434b871-006f-4585-bdab-335b26ecc539,Namespace:calico-system,Attempt:0,} returns sandbox id \"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\"" Jun 21 05:05:07.523570 containerd[1565]: time="2025-06-21T05:05:07.523369262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 21 05:05:07.544299 containerd[1565]: time="2025-06-21T05:05:07.544254269Z" level=info msg="connecting to shim 3cf793a1d6a25ee78d9f64c6d7dbc40b623fd987128e32b0d4dc3b60c4ddf655" address="unix:///run/containerd/s/cbcd527ed1008556c47e2bc67db9168a3808561b10a499c338738927d913faab" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:07.573606 systemd[1]: Started cri-containerd-3cf793a1d6a25ee78d9f64c6d7dbc40b623fd987128e32b0d4dc3b60c4ddf655.scope - libcontainer container 3cf793a1d6a25ee78d9f64c6d7dbc40b623fd987128e32b0d4dc3b60c4ddf655. 
Jun 21 05:05:07.614778 containerd[1565]: time="2025-06-21T05:05:07.614743358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54bcb67b96-r9vzr,Uid:e17996fa-e340-40b2-8d8e-8f426f11f155,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cf793a1d6a25ee78d9f64c6d7dbc40b623fd987128e32b0d4dc3b60c4ddf655\"" Jun 21 05:05:07.615346 kubelet[2681]: E0621 05:05:07.615321 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:08.244556 kubelet[2681]: E0621 05:05:08.244475 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnpdd" podUID="3eee9492-ff67-4a0d-a49e-690cbd0112e0" Jun 21 05:05:08.951522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560924740.mount: Deactivated successfully. 
Jun 21 05:05:09.101726 containerd[1565]: time="2025-06-21T05:05:09.101664406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:09.102425 containerd[1565]: time="2025-06-21T05:05:09.102395325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=5934468" Jun 21 05:05:09.103585 containerd[1565]: time="2025-06-21T05:05:09.103556290Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:09.105609 containerd[1565]: time="2025-06-21T05:05:09.105554186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:09.106131 containerd[1565]: time="2025-06-21T05:05:09.106099953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.582699443s" Jun 21 05:05:09.106178 containerd[1565]: time="2025-06-21T05:05:09.106139148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 21 05:05:09.107606 containerd[1565]: time="2025-06-21T05:05:09.107569726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 21 05:05:09.108465 containerd[1565]: time="2025-06-21T05:05:09.108432085Z" level=info msg="CreateContainer within 
sandbox \"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 21 05:05:09.117340 containerd[1565]: time="2025-06-21T05:05:09.117307486Z" level=info msg="Container eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:09.125511 containerd[1565]: time="2025-06-21T05:05:09.125448573Z" level=info msg="CreateContainer within sandbox \"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\"" Jun 21 05:05:09.126138 containerd[1565]: time="2025-06-21T05:05:09.125949114Z" level=info msg="StartContainer for \"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\"" Jun 21 05:05:09.127208 containerd[1565]: time="2025-06-21T05:05:09.127186655Z" level=info msg="connecting to shim eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3" address="unix:///run/containerd/s/4ddfcd1baf73d156b044fc681cd328351efccf44098273de847bc140fbb885cb" protocol=ttrpc version=3 Jun 21 05:05:09.150636 systemd[1]: Started cri-containerd-eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3.scope - libcontainer container eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3. Jun 21 05:05:09.201720 systemd[1]: cri-containerd-eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3.scope: Deactivated successfully. 
Jun 21 05:05:09.203814 containerd[1565]: time="2025-06-21T05:05:09.203779099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\" id:\"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\" pid:3303 exited_at:{seconds:1750482309 nanos:203190461}" Jun 21 05:05:09.226011 containerd[1565]: time="2025-06-21T05:05:09.225930052Z" level=info msg="received exit event container_id:\"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\" id:\"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\" pid:3303 exited_at:{seconds:1750482309 nanos:203190461}" Jun 21 05:05:09.227702 containerd[1565]: time="2025-06-21T05:05:09.227676390Z" level=info msg="StartContainer for \"eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3\" returns successfully" Jun 21 05:05:09.314618 update_engine[1558]: I20250621 05:05:09.314541 1558 update_attempter.cc:509] Updating boot flags... Jun 21 05:05:09.930880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb025b9dbf99d9109b183a352945c81507357d5754b421a09799bdbf82a898d3-rootfs.mount: Deactivated successfully. 
Jun 21 05:05:10.244119 kubelet[2681]: E0621 05:05:10.244051 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnpdd" podUID="3eee9492-ff67-4a0d-a49e-690cbd0112e0" Jun 21 05:05:10.831578 containerd[1565]: time="2025-06-21T05:05:10.831477521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:10.832377 containerd[1565]: time="2025-06-21T05:05:10.832344257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33735047" Jun 21 05:05:10.833636 containerd[1565]: time="2025-06-21T05:05:10.833593889Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:10.835349 containerd[1565]: time="2025-06-21T05:05:10.835313123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:10.835885 containerd[1565]: time="2025-06-21T05:05:10.835841486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 1.728214491s" Jun 21 05:05:10.835885 containerd[1565]: time="2025-06-21T05:05:10.835883185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 21 05:05:10.836949 containerd[1565]: time="2025-06-21T05:05:10.836924131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 21 05:05:10.844826 containerd[1565]: time="2025-06-21T05:05:10.844717856Z" level=info msg="CreateContainer within sandbox \"3cf793a1d6a25ee78d9f64c6d7dbc40b623fd987128e32b0d4dc3b60c4ddf655\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 21 05:05:10.854038 containerd[1565]: time="2025-06-21T05:05:10.853993054Z" level=info msg="Container 37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:10.862681 containerd[1565]: time="2025-06-21T05:05:10.862618669Z" level=info msg="CreateContainer within sandbox \"3cf793a1d6a25ee78d9f64c6d7dbc40b623fd987128e32b0d4dc3b60c4ddf655\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1\"" Jun 21 05:05:10.863322 containerd[1565]: time="2025-06-21T05:05:10.863278041Z" level=info msg="StartContainer for \"37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1\"" Jun 21 05:05:10.864719 containerd[1565]: time="2025-06-21T05:05:10.864348111Z" level=info msg="connecting to shim 37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1" address="unix:///run/containerd/s/cbcd527ed1008556c47e2bc67db9168a3808561b10a499c338738927d913faab" protocol=ttrpc version=3 Jun 21 05:05:10.888627 systemd[1]: Started cri-containerd-37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1.scope - libcontainer container 37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1. 
Jun 21 05:05:10.939298 containerd[1565]: time="2025-06-21T05:05:10.939246927Z" level=info msg="StartContainer for \"37ef4dbc73b359ce9335559c15040cd8676f5a541e5b51f6dfc65f3b12e979c1\" returns successfully" Jun 21 05:05:11.303797 kubelet[2681]: E0621 05:05:11.303761 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:11.314027 kubelet[2681]: I0621 05:05:11.313966 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54bcb67b96-r9vzr" podStartSLOduration=3.092972007 podStartE2EDuration="6.313944813s" podCreationTimestamp="2025-06-21 05:05:05 +0000 UTC" firstStartedPulling="2025-06-21 05:05:07.615751436 +0000 UTC m=+16.659313041" lastFinishedPulling="2025-06-21 05:05:10.836724222 +0000 UTC m=+19.880285847" observedRunningTime="2025-06-21 05:05:11.313946386 +0000 UTC m=+20.357507991" watchObservedRunningTime="2025-06-21 05:05:11.313944813 +0000 UTC m=+20.357506418" Jun 21 05:05:12.243944 kubelet[2681]: E0621 05:05:12.243896 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnpdd" podUID="3eee9492-ff67-4a0d-a49e-690cbd0112e0" Jun 21 05:05:12.305689 kubelet[2681]: I0621 05:05:12.305649 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:12.306196 kubelet[2681]: E0621 05:05:12.306031 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:14.244798 kubelet[2681]: E0621 05:05:14.244742 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gnpdd" podUID="3eee9492-ff67-4a0d-a49e-690cbd0112e0" Jun 21 05:05:14.466281 containerd[1565]: time="2025-06-21T05:05:14.466215439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:14.466982 containerd[1565]: time="2025-06-21T05:05:14.466953417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 21 05:05:14.468313 containerd[1565]: time="2025-06-21T05:05:14.468272994Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:14.470171 containerd[1565]: time="2025-06-21T05:05:14.470122766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:14.470742 containerd[1565]: time="2025-06-21T05:05:14.470706351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 3.633751972s" Jun 21 05:05:14.470777 containerd[1565]: time="2025-06-21T05:05:14.470746027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 21 05:05:14.472622 containerd[1565]: time="2025-06-21T05:05:14.472596791Z" level=info msg="CreateContainer within sandbox 
\"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 21 05:05:14.483256 containerd[1565]: time="2025-06-21T05:05:14.483193694Z" level=info msg="Container 715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:14.492585 containerd[1565]: time="2025-06-21T05:05:14.492538576Z" level=info msg="CreateContainer within sandbox \"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\"" Jun 21 05:05:14.493428 containerd[1565]: time="2025-06-21T05:05:14.493062598Z" level=info msg="StartContainer for \"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\"" Jun 21 05:05:14.494547 containerd[1565]: time="2025-06-21T05:05:14.494519988Z" level=info msg="connecting to shim 715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85" address="unix:///run/containerd/s/4ddfcd1baf73d156b044fc681cd328351efccf44098273de847bc140fbb885cb" protocol=ttrpc version=3 Jun 21 05:05:14.518653 systemd[1]: Started cri-containerd-715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85.scope - libcontainer container 715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85. Jun 21 05:05:14.562062 containerd[1565]: time="2025-06-21T05:05:14.561998153Z" level=info msg="StartContainer for \"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\" returns successfully" Jun 21 05:05:15.657442 systemd[1]: cri-containerd-715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85.scope: Deactivated successfully. Jun 21 05:05:15.658519 systemd[1]: cri-containerd-715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85.scope: Consumed 585ms CPU time, 175.6M memory peak, 3.6M read from disk, 171.2M written to disk. 
Jun 21 05:05:15.659883 containerd[1565]: time="2025-06-21T05:05:15.659833715Z" level=info msg="received exit event container_id:\"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\" id:\"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\" pid:3423 exited_at:{seconds:1750482315 nanos:659632353}" Jun 21 05:05:15.660233 containerd[1565]: time="2025-06-21T05:05:15.659915449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\" id:\"715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85\" pid:3423 exited_at:{seconds:1750482315 nanos:659632353}" Jun 21 05:05:15.685441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-715d0fbad2793667f6ef6d83b5898f4256214a33ac23a020da19c90a43a64e85-rootfs.mount: Deactivated successfully. Jun 21 05:05:15.750708 kubelet[2681]: I0621 05:05:15.750626 2681 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 05:05:15.804434 systemd[1]: Created slice kubepods-burstable-pod240bb991_39e3_416d_9e70_c9d62b670e47.slice - libcontainer container kubepods-burstable-pod240bb991_39e3_416d_9e70_c9d62b670e47.slice. 
Jun 21 05:05:15.810985 kubelet[2681]: I0621 05:05:15.810270 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs4c\" (UniqueName: \"kubernetes.io/projected/daacbc27-7e95-4a0a-8d82-158302d37be1-kube-api-access-rgs4c\") pod \"coredns-668d6bf9bc-96dq5\" (UID: \"daacbc27-7e95-4a0a-8d82-158302d37be1\") " pod="kube-system/coredns-668d6bf9bc-96dq5" Jun 21 05:05:15.811510 kubelet[2681]: I0621 05:05:15.811227 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e2bfcd5-cea0-471e-a938-55011eaffd6d-calico-apiserver-certs\") pod \"calico-apiserver-b855447fc-sd4n7\" (UID: \"3e2bfcd5-cea0-471e-a938-55011eaffd6d\") " pod="calico-apiserver/calico-apiserver-b855447fc-sd4n7" Jun 21 05:05:15.811510 kubelet[2681]: I0621 05:05:15.811266 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttg7d\" (UniqueName: \"kubernetes.io/projected/240bb991-39e3-416d-9e70-c9d62b670e47-kube-api-access-ttg7d\") pod \"coredns-668d6bf9bc-dljws\" (UID: \"240bb991-39e3-416d-9e70-c9d62b670e47\") " pod="kube-system/coredns-668d6bf9bc-dljws" Jun 21 05:05:15.811510 kubelet[2681]: I0621 05:05:15.811297 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh85r\" (UniqueName: \"kubernetes.io/projected/3e2bfcd5-cea0-471e-a938-55011eaffd6d-kube-api-access-mh85r\") pod \"calico-apiserver-b855447fc-sd4n7\" (UID: \"3e2bfcd5-cea0-471e-a938-55011eaffd6d\") " pod="calico-apiserver/calico-apiserver-b855447fc-sd4n7" Jun 21 05:05:15.811510 kubelet[2681]: I0621 05:05:15.811358 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c4840fe-1697-4038-8594-535953a4cc31-whisker-backend-key-pair\") pod 
\"whisker-9c4976d8c-ztk9t\" (UID: \"6c4840fe-1697-4038-8594-535953a4cc31\") " pod="calico-system/whisker-9c4976d8c-ztk9t" Jun 21 05:05:15.811510 kubelet[2681]: I0621 05:05:15.811396 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/daacbc27-7e95-4a0a-8d82-158302d37be1-config-volume\") pod \"coredns-668d6bf9bc-96dq5\" (UID: \"daacbc27-7e95-4a0a-8d82-158302d37be1\") " pod="kube-system/coredns-668d6bf9bc-96dq5" Jun 21 05:05:15.811806 kubelet[2681]: I0621 05:05:15.811426 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c5ebf5d-9ea4-4847-b325-026a751564b0-calico-apiserver-certs\") pod \"calico-apiserver-b855447fc-gjvrv\" (UID: \"1c5ebf5d-9ea4-4847-b325-026a751564b0\") " pod="calico-apiserver/calico-apiserver-b855447fc-gjvrv" Jun 21 05:05:15.811806 kubelet[2681]: I0621 05:05:15.811447 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e769b160-ade4-402a-9f48-f153b80ddcf1-tigera-ca-bundle\") pod \"calico-kube-controllers-7b69c775df-jfqx8\" (UID: \"e769b160-ade4-402a-9f48-f153b80ddcf1\") " pod="calico-system/calico-kube-controllers-7b69c775df-jfqx8" Jun 21 05:05:15.811806 kubelet[2681]: I0621 05:05:15.811474 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c4840fe-1697-4038-8594-535953a4cc31-whisker-ca-bundle\") pod \"whisker-9c4976d8c-ztk9t\" (UID: \"6c4840fe-1697-4038-8594-535953a4cc31\") " pod="calico-system/whisker-9c4976d8c-ztk9t" Jun 21 05:05:15.812109 kubelet[2681]: I0621 05:05:15.811956 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4h9\" (UniqueName: 
\"kubernetes.io/projected/6c4840fe-1697-4038-8594-535953a4cc31-kube-api-access-gv4h9\") pod \"whisker-9c4976d8c-ztk9t\" (UID: \"6c4840fe-1697-4038-8594-535953a4cc31\") " pod="calico-system/whisker-9c4976d8c-ztk9t" Jun 21 05:05:15.812109 kubelet[2681]: I0621 05:05:15.811991 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/240bb991-39e3-416d-9e70-c9d62b670e47-config-volume\") pod \"coredns-668d6bf9bc-dljws\" (UID: \"240bb991-39e3-416d-9e70-c9d62b670e47\") " pod="kube-system/coredns-668d6bf9bc-dljws" Jun 21 05:05:15.812109 kubelet[2681]: I0621 05:05:15.812021 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqfm6\" (UniqueName: \"kubernetes.io/projected/1c5ebf5d-9ea4-4847-b325-026a751564b0-kube-api-access-dqfm6\") pod \"calico-apiserver-b855447fc-gjvrv\" (UID: \"1c5ebf5d-9ea4-4847-b325-026a751564b0\") " pod="calico-apiserver/calico-apiserver-b855447fc-gjvrv" Jun 21 05:05:15.812109 kubelet[2681]: I0621 05:05:15.812054 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc6fh\" (UniqueName: \"kubernetes.io/projected/e769b160-ade4-402a-9f48-f153b80ddcf1-kube-api-access-nc6fh\") pod \"calico-kube-controllers-7b69c775df-jfqx8\" (UID: \"e769b160-ade4-402a-9f48-f153b80ddcf1\") " pod="calico-system/calico-kube-controllers-7b69c775df-jfqx8" Jun 21 05:05:15.814689 systemd[1]: Created slice kubepods-besteffort-pode769b160_ade4_402a_9f48_f153b80ddcf1.slice - libcontainer container kubepods-besteffort-pode769b160_ade4_402a_9f48_f153b80ddcf1.slice. Jun 21 05:05:15.824820 systemd[1]: Created slice kubepods-besteffort-pod3e2bfcd5_cea0_471e_a938_55011eaffd6d.slice - libcontainer container kubepods-besteffort-pod3e2bfcd5_cea0_471e_a938_55011eaffd6d.slice. 
Jun 21 05:05:15.828477 systemd[1]: Created slice kubepods-besteffort-pod1c5ebf5d_9ea4_4847_b325_026a751564b0.slice - libcontainer container kubepods-besteffort-pod1c5ebf5d_9ea4_4847_b325_026a751564b0.slice. Jun 21 05:05:15.837219 systemd[1]: Created slice kubepods-besteffort-pod6c4840fe_1697_4038_8594_535953a4cc31.slice - libcontainer container kubepods-besteffort-pod6c4840fe_1697_4038_8594_535953a4cc31.slice. Jun 21 05:05:15.844717 systemd[1]: Created slice kubepods-besteffort-pod55148328_07b8_4d25_90b2_fa374be29f23.slice - libcontainer container kubepods-besteffort-pod55148328_07b8_4d25_90b2_fa374be29f23.slice. Jun 21 05:05:15.853698 systemd[1]: Created slice kubepods-burstable-poddaacbc27_7e95_4a0a_8d82_158302d37be1.slice - libcontainer container kubepods-burstable-poddaacbc27_7e95_4a0a_8d82_158302d37be1.slice. Jun 21 05:05:15.913475 kubelet[2681]: I0621 05:05:15.912505 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/55148328-07b8-4d25-90b2-fa374be29f23-goldmane-key-pair\") pod \"goldmane-5bd85449d4-z8jd5\" (UID: \"55148328-07b8-4d25-90b2-fa374be29f23\") " pod="calico-system/goldmane-5bd85449d4-z8jd5" Jun 21 05:05:15.913475 kubelet[2681]: I0621 05:05:15.912703 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26kh\" (UniqueName: \"kubernetes.io/projected/55148328-07b8-4d25-90b2-fa374be29f23-kube-api-access-g26kh\") pod \"goldmane-5bd85449d4-z8jd5\" (UID: \"55148328-07b8-4d25-90b2-fa374be29f23\") " pod="calico-system/goldmane-5bd85449d4-z8jd5" Jun 21 05:05:15.913475 kubelet[2681]: I0621 05:05:15.912771 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55148328-07b8-4d25-90b2-fa374be29f23-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-z8jd5\" (UID: 
\"55148328-07b8-4d25-90b2-fa374be29f23\") " pod="calico-system/goldmane-5bd85449d4-z8jd5" Jun 21 05:05:15.913475 kubelet[2681]: I0621 05:05:15.912857 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55148328-07b8-4d25-90b2-fa374be29f23-config\") pod \"goldmane-5bd85449d4-z8jd5\" (UID: \"55148328-07b8-4d25-90b2-fa374be29f23\") " pod="calico-system/goldmane-5bd85449d4-z8jd5" Jun 21 05:05:16.112140 kubelet[2681]: E0621 05:05:16.112091 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:16.112859 containerd[1565]: time="2025-06-21T05:05:16.112794681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dljws,Uid:240bb991-39e3-416d-9e70-c9d62b670e47,Namespace:kube-system,Attempt:0,}" Jun 21 05:05:16.121546 containerd[1565]: time="2025-06-21T05:05:16.121506137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b69c775df-jfqx8,Uid:e769b160-ade4-402a-9f48-f153b80ddcf1,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:16.133401 containerd[1565]: time="2025-06-21T05:05:16.133363251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b855447fc-sd4n7,Uid:3e2bfcd5-cea0-471e-a938-55011eaffd6d,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:05:16.134326 containerd[1565]: time="2025-06-21T05:05:16.134308378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b855447fc-gjvrv,Uid:1c5ebf5d-9ea4-4847-b325-026a751564b0,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:05:16.142802 containerd[1565]: time="2025-06-21T05:05:16.142699988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c4976d8c-ztk9t,Uid:6c4840fe-1697-4038-8594-535953a4cc31,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:16.150160 containerd[1565]: 
time="2025-06-21T05:05:16.150042295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-z8jd5,Uid:55148328-07b8-4d25-90b2-fa374be29f23,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:16.157293 kubelet[2681]: E0621 05:05:16.157252 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:16.161283 containerd[1565]: time="2025-06-21T05:05:16.161164639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96dq5,Uid:daacbc27-7e95-4a0a-8d82-158302d37be1,Namespace:kube-system,Attempt:0,}" Jun 21 05:05:16.252260 containerd[1565]: time="2025-06-21T05:05:16.252204534Z" level=error msg="Failed to destroy network for sandbox \"972061977109ad95fdb6812710ce70534f51cf403bf487d3839d0bbdfe075b66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.253294 containerd[1565]: time="2025-06-21T05:05:16.253253106Z" level=error msg="Failed to destroy network for sandbox \"046638e0ebd9e7370d2de480b292a4545c6114a70a0f5f8618468b03901ed8f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.254311 containerd[1565]: time="2025-06-21T05:05:16.254218022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dljws,Uid:240bb991-39e3-416d-9e70-c9d62b670e47,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"972061977109ad95fdb6812710ce70534f51cf403bf487d3839d0bbdfe075b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jun 21 05:05:16.255259 systemd[1]: Created slice kubepods-besteffort-pod3eee9492_ff67_4a0d_a49e_690cbd0112e0.slice - libcontainer container kubepods-besteffort-pod3eee9492_ff67_4a0d_a49e_690cbd0112e0.slice. Jun 21 05:05:16.256130 containerd[1565]: time="2025-06-21T05:05:16.256077919Z" level=error msg="Failed to destroy network for sandbox \"602f2c6483327502562f648a31b2ad3d25bb490597dc3d970d06c3386a88d650\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.257528 containerd[1565]: time="2025-06-21T05:05:16.256892951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b855447fc-gjvrv,Uid:1c5ebf5d-9ea4-4847-b325-026a751564b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"046638e0ebd9e7370d2de480b292a4545c6114a70a0f5f8618468b03901ed8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.258258 containerd[1565]: time="2025-06-21T05:05:16.258224289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnpdd,Uid:3eee9492-ff67-4a0d-a49e-690cbd0112e0,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:16.260397 containerd[1565]: time="2025-06-21T05:05:16.259609027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b69c775df-jfqx8,Uid:e769b160-ade4-402a-9f48-f153b80ddcf1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f2c6483327502562f648a31b2ad3d25bb490597dc3d970d06c3386a88d650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.272183 containerd[1565]: time="2025-06-21T05:05:16.272134626Z" level=error msg="Failed to destroy network for sandbox \"7399a62c03c2f0f9b28e7d4645a844cf999fa7fcf2050d441036ebb5dbd004bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.277519 containerd[1565]: time="2025-06-21T05:05:16.277347595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b855447fc-sd4n7,Uid:3e2bfcd5-cea0-471e-a938-55011eaffd6d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7399a62c03c2f0f9b28e7d4645a844cf999fa7fcf2050d441036ebb5dbd004bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.278628 kubelet[2681]: E0621 05:05:16.278348 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972061977109ad95fdb6812710ce70534f51cf403bf487d3839d0bbdfe075b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.278628 kubelet[2681]: E0621 05:05:16.278469 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f2c6483327502562f648a31b2ad3d25bb490597dc3d970d06c3386a88d650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.278628 kubelet[2681]: E0621 05:05:16.278411 2681 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"046638e0ebd9e7370d2de480b292a4545c6114a70a0f5f8618468b03901ed8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.278628 kubelet[2681]: E0621 05:05:16.278632 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f2c6483327502562f648a31b2ad3d25bb490597dc3d970d06c3386a88d650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b69c775df-jfqx8" Jun 21 05:05:16.279358 kubelet[2681]: E0621 05:05:16.278638 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"046638e0ebd9e7370d2de480b292a4545c6114a70a0f5f8618468b03901ed8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b855447fc-gjvrv" Jun 21 05:05:16.279358 kubelet[2681]: E0621 05:05:16.278656 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602f2c6483327502562f648a31b2ad3d25bb490597dc3d970d06c3386a88d650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b69c775df-jfqx8" Jun 21 05:05:16.279358 kubelet[2681]: E0621 05:05:16.278665 2681 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"046638e0ebd9e7370d2de480b292a4545c6114a70a0f5f8618468b03901ed8f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b855447fc-gjvrv" Jun 21 05:05:16.279516 kubelet[2681]: E0621 05:05:16.278733 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b69c775df-jfqx8_calico-system(e769b160-ade4-402a-9f48-f153b80ddcf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b69c775df-jfqx8_calico-system(e769b160-ade4-402a-9f48-f153b80ddcf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"602f2c6483327502562f648a31b2ad3d25bb490597dc3d970d06c3386a88d650\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b69c775df-jfqx8" podUID="e769b160-ade4-402a-9f48-f153b80ddcf1" Jun 21 05:05:16.279516 kubelet[2681]: E0621 05:05:16.278452 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7399a62c03c2f0f9b28e7d4645a844cf999fa7fcf2050d441036ebb5dbd004bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.279516 kubelet[2681]: E0621 05:05:16.278800 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7399a62c03c2f0f9b28e7d4645a844cf999fa7fcf2050d441036ebb5dbd004bf\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b855447fc-sd4n7" Jun 21 05:05:16.279647 kubelet[2681]: E0621 05:05:16.278822 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7399a62c03c2f0f9b28e7d4645a844cf999fa7fcf2050d441036ebb5dbd004bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b855447fc-sd4n7" Jun 21 05:05:16.279647 kubelet[2681]: E0621 05:05:16.278861 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b855447fc-sd4n7_calico-apiserver(3e2bfcd5-cea0-471e-a938-55011eaffd6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b855447fc-sd4n7_calico-apiserver(3e2bfcd5-cea0-471e-a938-55011eaffd6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7399a62c03c2f0f9b28e7d4645a844cf999fa7fcf2050d441036ebb5dbd004bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b855447fc-sd4n7" podUID="3e2bfcd5-cea0-471e-a938-55011eaffd6d" Jun 21 05:05:16.279647 kubelet[2681]: E0621 05:05:16.279516 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972061977109ad95fdb6812710ce70534f51cf403bf487d3839d0bbdfe075b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-dljws" Jun 21 05:05:16.279769 kubelet[2681]: E0621 05:05:16.279541 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972061977109ad95fdb6812710ce70534f51cf403bf487d3839d0bbdfe075b66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dljws" Jun 21 05:05:16.279769 kubelet[2681]: E0621 05:05:16.279673 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dljws_kube-system(240bb991-39e3-416d-9e70-c9d62b670e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dljws_kube-system(240bb991-39e3-416d-9e70-c9d62b670e47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"972061977109ad95fdb6812710ce70534f51cf403bf487d3839d0bbdfe075b66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dljws" podUID="240bb991-39e3-416d-9e70-c9d62b670e47" Jun 21 05:05:16.279769 kubelet[2681]: E0621 05:05:16.278800 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b855447fc-gjvrv_calico-apiserver(1c5ebf5d-9ea4-4847-b325-026a751564b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b855447fc-gjvrv_calico-apiserver(1c5ebf5d-9ea4-4847-b325-026a751564b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"046638e0ebd9e7370d2de480b292a4545c6114a70a0f5f8618468b03901ed8f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b855447fc-gjvrv" podUID="1c5ebf5d-9ea4-4847-b325-026a751564b0" Jun 21 05:05:16.281184 containerd[1565]: time="2025-06-21T05:05:16.280656492Z" level=error msg="Failed to destroy network for sandbox \"c920918f721b568ef29f8df4540283a615cb74ab7d40466678925c3b15b77f74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.283861 containerd[1565]: time="2025-06-21T05:05:16.283624736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-z8jd5,Uid:55148328-07b8-4d25-90b2-fa374be29f23,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c920918f721b568ef29f8df4540283a615cb74ab7d40466678925c3b15b77f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.284872 containerd[1565]: time="2025-06-21T05:05:16.284833071Z" level=error msg="Failed to destroy network for sandbox \"fc98e8da9b2fd01890d83fa514a682bd3039d2208d5dba5108aeb3be1317b54b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.286197 kubelet[2681]: E0621 05:05:16.285466 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c920918f721b568ef29f8df4540283a615cb74ab7d40466678925c3b15b77f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.286197 
kubelet[2681]: E0621 05:05:16.285558 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c920918f721b568ef29f8df4540283a615cb74ab7d40466678925c3b15b77f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-z8jd5" Jun 21 05:05:16.286197 kubelet[2681]: E0621 05:05:16.285576 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c920918f721b568ef29f8df4540283a615cb74ab7d40466678925c3b15b77f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-z8jd5" Jun 21 05:05:16.286436 kubelet[2681]: E0621 05:05:16.285613 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-z8jd5_calico-system(55148328-07b8-4d25-90b2-fa374be29f23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-z8jd5_calico-system(55148328-07b8-4d25-90b2-fa374be29f23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c920918f721b568ef29f8df4540283a615cb74ab7d40466678925c3b15b77f74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-z8jd5" podUID="55148328-07b8-4d25-90b2-fa374be29f23" Jun 21 05:05:16.288226 containerd[1565]: time="2025-06-21T05:05:16.287965656Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-9c4976d8c-ztk9t,Uid:6c4840fe-1697-4038-8594-535953a4cc31,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc98e8da9b2fd01890d83fa514a682bd3039d2208d5dba5108aeb3be1317b54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.288305 kubelet[2681]: E0621 05:05:16.288136 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc98e8da9b2fd01890d83fa514a682bd3039d2208d5dba5108aeb3be1317b54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.288305 kubelet[2681]: E0621 05:05:16.288178 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc98e8da9b2fd01890d83fa514a682bd3039d2208d5dba5108aeb3be1317b54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9c4976d8c-ztk9t" Jun 21 05:05:16.288305 kubelet[2681]: E0621 05:05:16.288199 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc98e8da9b2fd01890d83fa514a682bd3039d2208d5dba5108aeb3be1317b54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9c4976d8c-ztk9t" Jun 21 05:05:16.288383 kubelet[2681]: E0621 05:05:16.288235 2681 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9c4976d8c-ztk9t_calico-system(6c4840fe-1697-4038-8594-535953a4cc31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9c4976d8c-ztk9t_calico-system(6c4840fe-1697-4038-8594-535953a4cc31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc98e8da9b2fd01890d83fa514a682bd3039d2208d5dba5108aeb3be1317b54b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9c4976d8c-ztk9t" podUID="6c4840fe-1697-4038-8594-535953a4cc31" Jun 21 05:05:16.307006 containerd[1565]: time="2025-06-21T05:05:16.306967031Z" level=error msg="Failed to destroy network for sandbox \"0ac4046b96db714f34994dde937da67cbf16cc694748dd5e7f12277570c8a3a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.308221 containerd[1565]: time="2025-06-21T05:05:16.308188410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96dq5,Uid:daacbc27-7e95-4a0a-8d82-158302d37be1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ac4046b96db714f34994dde937da67cbf16cc694748dd5e7f12277570c8a3a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.308424 kubelet[2681]: E0621 05:05:16.308353 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ac4046b96db714f34994dde937da67cbf16cc694748dd5e7f12277570c8a3a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.308424 kubelet[2681]: E0621 05:05:16.308399 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ac4046b96db714f34994dde937da67cbf16cc694748dd5e7f12277570c8a3a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-96dq5" Jun 21 05:05:16.308424 kubelet[2681]: E0621 05:05:16.308417 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ac4046b96db714f34994dde937da67cbf16cc694748dd5e7f12277570c8a3a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-96dq5" Jun 21 05:05:16.309016 kubelet[2681]: E0621 05:05:16.308457 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-96dq5_kube-system(daacbc27-7e95-4a0a-8d82-158302d37be1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-96dq5_kube-system(daacbc27-7e95-4a0a-8d82-158302d37be1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ac4046b96db714f34994dde937da67cbf16cc694748dd5e7f12277570c8a3a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-96dq5" podUID="daacbc27-7e95-4a0a-8d82-158302d37be1" Jun 21 05:05:16.324872 containerd[1565]: time="2025-06-21T05:05:16.324827459Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 21 05:05:16.356867 containerd[1565]: time="2025-06-21T05:05:16.356786670Z" level=error msg="Failed to destroy network for sandbox \"41a834e4c8cada32bdc81139ef082b577e943a1f3723f3b9b4c0e071a939a00e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.358239 containerd[1565]: time="2025-06-21T05:05:16.358198050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnpdd,Uid:3eee9492-ff67-4a0d-a49e-690cbd0112e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a834e4c8cada32bdc81139ef082b577e943a1f3723f3b9b4c0e071a939a00e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.358510 kubelet[2681]: E0621 05:05:16.358447 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a834e4c8cada32bdc81139ef082b577e943a1f3723f3b9b4c0e071a939a00e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 21 05:05:16.358574 kubelet[2681]: E0621 05:05:16.358528 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a834e4c8cada32bdc81139ef082b577e943a1f3723f3b9b4c0e071a939a00e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:16.358574 kubelet[2681]: E0621 05:05:16.358548 
2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a834e4c8cada32bdc81139ef082b577e943a1f3723f3b9b4c0e071a939a00e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gnpdd" Jun 21 05:05:16.358650 kubelet[2681]: E0621 05:05:16.358598 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gnpdd_calico-system(3eee9492-ff67-4a0d-a49e-690cbd0112e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gnpdd_calico-system(3eee9492-ff67-4a0d-a49e-690cbd0112e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41a834e4c8cada32bdc81139ef082b577e943a1f3723f3b9b4c0e071a939a00e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gnpdd" podUID="3eee9492-ff67-4a0d-a49e-690cbd0112e0" Jun 21 05:05:24.492450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503581526.mount: Deactivated successfully. 
Jun 21 05:05:25.345746 containerd[1565]: time="2025-06-21T05:05:25.345680509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:25.346804 containerd[1565]: time="2025-06-21T05:05:25.346760304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 21 05:05:25.348063 containerd[1565]: time="2025-06-21T05:05:25.348034084Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:25.349949 containerd[1565]: time="2025-06-21T05:05:25.349912374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:25.350465 containerd[1565]: time="2025-06-21T05:05:25.350411815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 9.02553891s" Jun 21 05:05:25.350519 containerd[1565]: time="2025-06-21T05:05:25.350466839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 21 05:05:25.360649 containerd[1565]: time="2025-06-21T05:05:25.360609293Z" level=info msg="CreateContainer within sandbox \"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 21 05:05:25.390987 containerd[1565]: time="2025-06-21T05:05:25.390860853Z" level=info msg="Container 
79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:25.401250 containerd[1565]: time="2025-06-21T05:05:25.401205317Z" level=info msg="CreateContainer within sandbox \"b763aeb1e60a26488ba33867d5167782ac9d4a473f04ca9c2efebe6d059f1dc8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490\"" Jun 21 05:05:25.407403 containerd[1565]: time="2025-06-21T05:05:25.407348915Z" level=info msg="StartContainer for \"79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490\"" Jun 21 05:05:25.408921 containerd[1565]: time="2025-06-21T05:05:25.408891522Z" level=info msg="connecting to shim 79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490" address="unix:///run/containerd/s/4ddfcd1baf73d156b044fc681cd328351efccf44098273de847bc140fbb885cb" protocol=ttrpc version=3 Jun 21 05:05:25.432734 systemd[1]: Started cri-containerd-79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490.scope - libcontainer container 79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490. Jun 21 05:05:25.476919 containerd[1565]: time="2025-06-21T05:05:25.476871373Z" level=info msg="StartContainer for \"79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490\" returns successfully" Jun 21 05:05:25.550936 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 21 05:05:25.551682 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 21 05:05:25.676676 kubelet[2681]: I0621 05:05:25.676221 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c4840fe-1697-4038-8594-535953a4cc31-whisker-ca-bundle\") pod \"6c4840fe-1697-4038-8594-535953a4cc31\" (UID: \"6c4840fe-1697-4038-8594-535953a4cc31\") " Jun 21 05:05:25.676676 kubelet[2681]: I0621 05:05:25.676304 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c4840fe-1697-4038-8594-535953a4cc31-whisker-backend-key-pair\") pod \"6c4840fe-1697-4038-8594-535953a4cc31\" (UID: \"6c4840fe-1697-4038-8594-535953a4cc31\") " Jun 21 05:05:25.676676 kubelet[2681]: I0621 05:05:25.676327 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv4h9\" (UniqueName: \"kubernetes.io/projected/6c4840fe-1697-4038-8594-535953a4cc31-kube-api-access-gv4h9\") pod \"6c4840fe-1697-4038-8594-535953a4cc31\" (UID: \"6c4840fe-1697-4038-8594-535953a4cc31\") " Jun 21 05:05:25.677237 kubelet[2681]: I0621 05:05:25.677169 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4840fe-1697-4038-8594-535953a4cc31-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6c4840fe-1697-4038-8594-535953a4cc31" (UID: "6c4840fe-1697-4038-8594-535953a4cc31"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 05:05:25.680206 kubelet[2681]: I0621 05:05:25.680158 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4840fe-1697-4038-8594-535953a4cc31-kube-api-access-gv4h9" (OuterVolumeSpecName: "kube-api-access-gv4h9") pod "6c4840fe-1697-4038-8594-535953a4cc31" (UID: "6c4840fe-1697-4038-8594-535953a4cc31"). InnerVolumeSpecName "kube-api-access-gv4h9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 05:05:25.681099 kubelet[2681]: I0621 05:05:25.681043 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4840fe-1697-4038-8594-535953a4cc31-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6c4840fe-1697-4038-8594-535953a4cc31" (UID: "6c4840fe-1697-4038-8594-535953a4cc31"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 05:05:25.681595 systemd[1]: var-lib-kubelet-pods-6c4840fe\x2d1697\x2d4038\x2d8594\x2d535953a4cc31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgv4h9.mount: Deactivated successfully. Jun 21 05:05:25.684171 systemd[1]: var-lib-kubelet-pods-6c4840fe\x2d1697\x2d4038\x2d8594\x2d535953a4cc31-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 21 05:05:25.777020 kubelet[2681]: I0621 05:05:25.776963 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c4840fe-1697-4038-8594-535953a4cc31-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 21 05:05:25.777020 kubelet[2681]: I0621 05:05:25.777005 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c4840fe-1697-4038-8594-535953a4cc31-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 21 05:05:25.777020 kubelet[2681]: I0621 05:05:25.777016 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gv4h9\" (UniqueName: \"kubernetes.io/projected/6c4840fe-1697-4038-8594-535953a4cc31-kube-api-access-gv4h9\") on node \"localhost\" DevicePath \"\"" Jun 21 05:05:26.367918 systemd[1]: Removed slice kubepods-besteffort-pod6c4840fe_1697_4038_8594_535953a4cc31.slice - libcontainer container kubepods-besteffort-pod6c4840fe_1697_4038_8594_535953a4cc31.slice. 
Jun 21 05:05:26.378062 kubelet[2681]: I0621 05:05:26.377973 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qr2vh" podStartSLOduration=2.549514956 podStartE2EDuration="20.377939792s" podCreationTimestamp="2025-06-21 05:05:06 +0000 UTC" firstStartedPulling="2025-06-21 05:05:07.522960955 +0000 UTC m=+16.566522560" lastFinishedPulling="2025-06-21 05:05:25.351385791 +0000 UTC m=+34.394947396" observedRunningTime="2025-06-21 05:05:26.377253769 +0000 UTC m=+35.420815374" watchObservedRunningTime="2025-06-21 05:05:26.377939792 +0000 UTC m=+35.421501417" Jun 21 05:05:26.432209 systemd[1]: Created slice kubepods-besteffort-pod7a37c974_b3ca_4dd7_a25d_838f778cd7f5.slice - libcontainer container kubepods-besteffort-pod7a37c974_b3ca_4dd7_a25d_838f778cd7f5.slice. Jun 21 05:05:26.481302 kubelet[2681]: I0621 05:05:26.481233 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a37c974-b3ca-4dd7-a25d-838f778cd7f5-whisker-backend-key-pair\") pod \"whisker-76447d75c6-87bqz\" (UID: \"7a37c974-b3ca-4dd7-a25d-838f778cd7f5\") " pod="calico-system/whisker-76447d75c6-87bqz" Jun 21 05:05:26.481302 kubelet[2681]: I0621 05:05:26.481268 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4589\" (UniqueName: \"kubernetes.io/projected/7a37c974-b3ca-4dd7-a25d-838f778cd7f5-kube-api-access-g4589\") pod \"whisker-76447d75c6-87bqz\" (UID: \"7a37c974-b3ca-4dd7-a25d-838f778cd7f5\") " pod="calico-system/whisker-76447d75c6-87bqz" Jun 21 05:05:26.481770 kubelet[2681]: I0621 05:05:26.481345 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a37c974-b3ca-4dd7-a25d-838f778cd7f5-whisker-ca-bundle\") pod \"whisker-76447d75c6-87bqz\" (UID: 
\"7a37c974-b3ca-4dd7-a25d-838f778cd7f5\") " pod="calico-system/whisker-76447d75c6-87bqz" Jun 21 05:05:26.737011 containerd[1565]: time="2025-06-21T05:05:26.736940932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76447d75c6-87bqz,Uid:7a37c974-b3ca-4dd7-a25d-838f778cd7f5,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:26.928951 systemd-networkd[1461]: cali889d3953979: Link UP Jun 21 05:05:26.929234 systemd-networkd[1461]: cali889d3953979: Gained carrier Jun 21 05:05:26.957059 containerd[1565]: 2025-06-21 05:05:26.763 [INFO][3803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:26.957059 containerd[1565]: 2025-06-21 05:05:26.780 [INFO][3803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76447d75c6--87bqz-eth0 whisker-76447d75c6- calico-system 7a37c974-b3ca-4dd7-a25d-838f778cd7f5 933 0 2025-06-21 05:05:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76447d75c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76447d75c6-87bqz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali889d3953979 [] [] }} ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-" Jun 21 05:05:26.957059 containerd[1565]: 2025-06-21 05:05:26.780 [INFO][3803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:26.957059 containerd[1565]: 2025-06-21 05:05:26.854 [INFO][3816] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" HandleID="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Workload="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.855 [INFO][3816] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" HandleID="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Workload="localhost-k8s-whisker--76447d75c6--87bqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76447d75c6-87bqz", "timestamp":"2025-06-21 05:05:26.854419784 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.855 [INFO][3816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.855 [INFO][3816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.855 [INFO][3816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.869 [INFO][3816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" host="localhost" Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.877 [INFO][3816] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.884 [INFO][3816] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.886 [INFO][3816] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.888 [INFO][3816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:26.957358 containerd[1565]: 2025-06-21 05:05:26.889 [INFO][3816] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" host="localhost" Jun 21 05:05:26.957996 containerd[1565]: 2025-06-21 05:05:26.890 [INFO][3816] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8 Jun 21 05:05:26.957996 containerd[1565]: 2025-06-21 05:05:26.897 [INFO][3816] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" host="localhost" Jun 21 05:05:26.957996 containerd[1565]: 2025-06-21 05:05:26.906 [INFO][3816] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" host="localhost" Jun 21 05:05:26.957996 containerd[1565]: 2025-06-21 05:05:26.907 [INFO][3816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" host="localhost" Jun 21 05:05:26.957996 containerd[1565]: 2025-06-21 05:05:26.907 [INFO][3816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:26.957996 containerd[1565]: 2025-06-21 05:05:26.907 [INFO][3816] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" HandleID="k8s-pod-network.0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Workload="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:26.958421 containerd[1565]: 2025-06-21 05:05:26.915 [INFO][3803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76447d75c6--87bqz-eth0", GenerateName:"whisker-76447d75c6-", Namespace:"calico-system", SelfLink:"", UID:"7a37c974-b3ca-4dd7-a25d-838f778cd7f5", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76447d75c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76447d75c6-87bqz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali889d3953979", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:26.958421 containerd[1565]: 2025-06-21 05:05:26.916 [INFO][3803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:26.958516 containerd[1565]: 2025-06-21 05:05:26.916 [INFO][3803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali889d3953979 ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:26.958516 containerd[1565]: 2025-06-21 05:05:26.938 [INFO][3803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:26.958566 containerd[1565]: 2025-06-21 05:05:26.938 [INFO][3803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" 
WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76447d75c6--87bqz-eth0", GenerateName:"whisker-76447d75c6-", Namespace:"calico-system", SelfLink:"", UID:"7a37c974-b3ca-4dd7-a25d-838f778cd7f5", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76447d75c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8", Pod:"whisker-76447d75c6-87bqz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali889d3953979", MAC:"92:5d:fe:99:6a:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:26.958623 containerd[1565]: 2025-06-21 05:05:26.951 [INFO][3803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" Namespace="calico-system" Pod="whisker-76447d75c6-87bqz" WorkloadEndpoint="localhost-k8s-whisker--76447d75c6--87bqz-eth0" Jun 21 05:05:27.244956 containerd[1565]: time="2025-06-21T05:05:27.244762571Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b855447fc-sd4n7,Uid:3e2bfcd5-cea0-471e-a938-55011eaffd6d,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:05:27.244956 containerd[1565]: time="2025-06-21T05:05:27.244932973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-z8jd5,Uid:55148328-07b8-4d25-90b2-fa374be29f23,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:27.363194 kubelet[2681]: I0621 05:05:27.363130 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:27.363909 kubelet[2681]: I0621 05:05:27.363446 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4840fe-1697-4038-8594-535953a4cc31" path="/var/lib/kubelet/pods/6c4840fe-1697-4038-8594-535953a4cc31/volumes" Jun 21 05:05:27.420993 containerd[1565]: time="2025-06-21T05:05:27.420933262Z" level=info msg="connecting to shim 0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8" address="unix:///run/containerd/s/1d0fb60c50caeaa313770ba0343b948e5fbe9ea10c547ae26617faa85608980b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:27.451705 systemd[1]: Started cri-containerd-0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8.scope - libcontainer container 0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8. 
Jun 21 05:05:27.470118 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:27.490830 systemd-networkd[1461]: cali94a50a26be2: Link UP Jun 21 05:05:27.491471 systemd-networkd[1461]: cali94a50a26be2: Gained carrier Jun 21 05:05:27.504762 containerd[1565]: time="2025-06-21T05:05:27.504642389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76447d75c6-87bqz,Uid:7a37c974-b3ca-4dd7-a25d-838f778cd7f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8\"" Jun 21 05:05:27.505954 containerd[1565]: time="2025-06-21T05:05:27.505932588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 21 05:05:27.506926 containerd[1565]: 2025-06-21 05:05:27.397 [INFO][3934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:27.506926 containerd[1565]: 2025-06-21 05:05:27.411 [INFO][3934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0 calico-apiserver-b855447fc- calico-apiserver 3e2bfcd5-cea0-471e-a938-55011eaffd6d 858 0 2025-06-21 05:05:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b855447fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b855447fc-sd4n7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94a50a26be2 [] [] }} ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-" Jun 21 05:05:27.506926 containerd[1565]: 2025-06-21 05:05:27.411 [INFO][3934] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.506926 containerd[1565]: 2025-06-21 05:05:27.445 [INFO][3973] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" HandleID="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Workload="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.446 [INFO][3973] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" HandleID="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Workload="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b855447fc-sd4n7", "timestamp":"2025-06-21 05:05:27.445823582 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.446 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.446 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.446 [INFO][3973] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.452 [INFO][3973] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" host="localhost" Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.457 [INFO][3973] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.464 [INFO][3973] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.466 [INFO][3973] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.468 [INFO][3973] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:27.507112 containerd[1565]: 2025-06-21 05:05:27.468 [INFO][3973] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" host="localhost" Jun 21 05:05:27.507346 containerd[1565]: 2025-06-21 05:05:27.469 [INFO][3973] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc Jun 21 05:05:27.507346 containerd[1565]: 2025-06-21 05:05:27.474 [INFO][3973] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" host="localhost" Jun 21 05:05:27.507346 containerd[1565]: 2025-06-21 05:05:27.482 [INFO][3973] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" host="localhost" Jun 21 05:05:27.507346 containerd[1565]: 2025-06-21 05:05:27.482 [INFO][3973] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" host="localhost" Jun 21 05:05:27.507346 containerd[1565]: 2025-06-21 05:05:27.482 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:27.507346 containerd[1565]: 2025-06-21 05:05:27.482 [INFO][3973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" HandleID="k8s-pod-network.b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Workload="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.507739 containerd[1565]: 2025-06-21 05:05:27.486 [INFO][3934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0", GenerateName:"calico-apiserver-b855447fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e2bfcd5-cea0-471e-a938-55011eaffd6d", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b855447fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b855447fc-sd4n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94a50a26be2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:27.507817 containerd[1565]: 2025-06-21 05:05:27.486 [INFO][3934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.507817 containerd[1565]: 2025-06-21 05:05:27.486 [INFO][3934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94a50a26be2 ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.507817 containerd[1565]: 2025-06-21 05:05:27.492 [INFO][3934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.507916 containerd[1565]: 2025-06-21 05:05:27.493 [INFO][3934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0", GenerateName:"calico-apiserver-b855447fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e2bfcd5-cea0-471e-a938-55011eaffd6d", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b855447fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc", Pod:"calico-apiserver-b855447fc-sd4n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94a50a26be2", MAC:"02:90:7d:8a:4f:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:27.507978 containerd[1565]: 2025-06-21 05:05:27.503 [INFO][3934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-sd4n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--sd4n7-eth0" Jun 21 05:05:27.531435 containerd[1565]: time="2025-06-21T05:05:27.531380841Z" level=info msg="connecting to shim b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc" address="unix:///run/containerd/s/2466a9a82c3bd1e0724a481c052da771f974558397ec85c1f4734d3a774de085" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:27.562776 systemd[1]: Started cri-containerd-b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc.scope - libcontainer container b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc. Jun 21 05:05:27.580667 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:27.586413 systemd-networkd[1461]: calic63ff64e4e7: Link UP Jun 21 05:05:27.587416 systemd-networkd[1461]: calic63ff64e4e7: Gained carrier Jun 21 05:05:27.603076 containerd[1565]: 2025-06-21 05:05:27.399 [INFO][3943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:27.603076 containerd[1565]: 2025-06-21 05:05:27.413 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0 goldmane-5bd85449d4- calico-system 55148328-07b8-4d25-90b2-fa374be29f23 859 0 2025-06-21 05:05:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5bd85449d4-z8jd5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic63ff64e4e7 [] [] }} ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" 
Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-" Jun 21 05:05:27.603076 containerd[1565]: 2025-06-21 05:05:27.413 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.603076 containerd[1565]: 2025-06-21 05:05:27.447 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" HandleID="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Workload="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.447 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" HandleID="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Workload="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003555f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5bd85449d4-z8jd5", "timestamp":"2025-06-21 05:05:27.447616099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.447 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.482 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.482 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.553 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" host="localhost" Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.560 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.564 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.566 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.568 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:27.603291 containerd[1565]: 2025-06-21 05:05:27.568 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" host="localhost" Jun 21 05:05:27.603611 containerd[1565]: 2025-06-21 05:05:27.570 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6 Jun 21 05:05:27.603611 containerd[1565]: 2025-06-21 05:05:27.574 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" host="localhost" Jun 21 05:05:27.603611 containerd[1565]: 2025-06-21 05:05:27.580 [INFO][3966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" host="localhost" Jun 21 05:05:27.603611 containerd[1565]: 2025-06-21 05:05:27.580 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" host="localhost" Jun 21 05:05:27.603611 containerd[1565]: 2025-06-21 05:05:27.580 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:27.603611 containerd[1565]: 2025-06-21 05:05:27.580 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" HandleID="k8s-pod-network.d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Workload="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.603817 containerd[1565]: 2025-06-21 05:05:27.583 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"55148328-07b8-4d25-90b2-fa374be29f23", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5bd85449d4-z8jd5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic63ff64e4e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:27.603817 containerd[1565]: 2025-06-21 05:05:27.584 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.603945 containerd[1565]: 2025-06-21 05:05:27.584 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic63ff64e4e7 ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.603945 containerd[1565]: 2025-06-21 05:05:27.588 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.604010 containerd[1565]: 2025-06-21 05:05:27.588 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" 
WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"55148328-07b8-4d25-90b2-fa374be29f23", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6", Pod:"goldmane-5bd85449d4-z8jd5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic63ff64e4e7", MAC:"32:01:00:05:08:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:27.604095 containerd[1565]: 2025-06-21 05:05:27.598 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" Namespace="calico-system" Pod="goldmane-5bd85449d4-z8jd5" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--z8jd5-eth0" Jun 21 05:05:27.619686 containerd[1565]: time="2025-06-21T05:05:27.619640158Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b855447fc-sd4n7,Uid:3e2bfcd5-cea0-471e-a938-55011eaffd6d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc\"" Jun 21 05:05:27.629633 containerd[1565]: time="2025-06-21T05:05:27.629579809Z" level=info msg="connecting to shim d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6" address="unix:///run/containerd/s/4e4f21b8d6cd611c8e5361f61846cc77be4003e513200154ae2a945ead21d29b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:27.663617 systemd[1]: Started cri-containerd-d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6.scope - libcontainer container d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6. Jun 21 05:05:27.676026 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:27.705472 containerd[1565]: time="2025-06-21T05:05:27.705434403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-z8jd5,Uid:55148328-07b8-4d25-90b2-fa374be29f23,Namespace:calico-system,Attempt:0,} returns sandbox id \"d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6\"" Jun 21 05:05:28.224647 systemd-networkd[1461]: cali889d3953979: Gained IPv6LL Jun 21 05:05:28.244350 containerd[1565]: time="2025-06-21T05:05:28.244309958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b69c775df-jfqx8,Uid:e769b160-ade4-402a-9f48-f153b80ddcf1,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:28.635151 systemd-networkd[1461]: cali305c7f26478: Link UP Jun 21 05:05:28.635764 systemd-networkd[1461]: cali305c7f26478: Gained carrier Jun 21 05:05:28.649667 containerd[1565]: 2025-06-21 05:05:28.558 [INFO][4150] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:28.649667 containerd[1565]: 2025-06-21 05:05:28.569 [INFO][4150] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0 calico-kube-controllers-7b69c775df- calico-system e769b160-ade4-402a-9f48-f153b80ddcf1 853 0 2025-06-21 05:05:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b69c775df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b69c775df-jfqx8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali305c7f26478 [] [] }} ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-" Jun 21 05:05:28.649667 containerd[1565]: 2025-06-21 05:05:28.569 [INFO][4150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.649667 containerd[1565]: 2025-06-21 05:05:28.596 [INFO][4165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" HandleID="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Workload="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.596 [INFO][4165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" HandleID="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" 
Workload="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b69c775df-jfqx8", "timestamp":"2025-06-21 05:05:28.596074848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.596 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.596 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.596 [INFO][4165] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.602 [INFO][4165] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" host="localhost" Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.606 [INFO][4165] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.610 [INFO][4165] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.611 [INFO][4165] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.613 [INFO][4165] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:28.649974 containerd[1565]: 2025-06-21 05:05:28.613 [INFO][4165] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" host="localhost" Jun 21 05:05:28.650287 containerd[1565]: 2025-06-21 05:05:28.614 [INFO][4165] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604 Jun 21 05:05:28.650287 containerd[1565]: 2025-06-21 05:05:28.622 [INFO][4165] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" host="localhost" Jun 21 05:05:28.650287 containerd[1565]: 2025-06-21 05:05:28.629 [INFO][4165] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" host="localhost" Jun 21 05:05:28.650287 containerd[1565]: 2025-06-21 05:05:28.629 [INFO][4165] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" host="localhost" Jun 21 05:05:28.650287 containerd[1565]: 2025-06-21 05:05:28.629 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 21 05:05:28.650287 containerd[1565]: 2025-06-21 05:05:28.630 [INFO][4165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" HandleID="k8s-pod-network.3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Workload="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.650467 containerd[1565]: 2025-06-21 05:05:28.633 [INFO][4150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0", GenerateName:"calico-kube-controllers-7b69c775df-", Namespace:"calico-system", SelfLink:"", UID:"e769b160-ade4-402a-9f48-f153b80ddcf1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b69c775df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b69c775df-jfqx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali305c7f26478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:28.650624 containerd[1565]: 2025-06-21 05:05:28.633 [INFO][4150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.650624 containerd[1565]: 2025-06-21 05:05:28.633 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali305c7f26478 ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.650624 containerd[1565]: 2025-06-21 05:05:28.635 [INFO][4150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.650686 containerd[1565]: 2025-06-21 05:05:28.635 [INFO][4150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0", GenerateName:"calico-kube-controllers-7b69c775df-", Namespace:"calico-system", SelfLink:"", UID:"e769b160-ade4-402a-9f48-f153b80ddcf1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b69c775df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604", Pod:"calico-kube-controllers-7b69c775df-jfqx8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali305c7f26478", MAC:"2e:5a:db:36:5a:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:28.650749 containerd[1565]: 2025-06-21 05:05:28.646 [INFO][4150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" Namespace="calico-system" Pod="calico-kube-controllers-7b69c775df-jfqx8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b69c775df--jfqx8-eth0" Jun 21 05:05:28.680118 containerd[1565]: time="2025-06-21T05:05:28.680049527Z" level=info msg="connecting to shim 
3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604" address="unix:///run/containerd/s/82f15d3bae79994b2bd8f74221e53d605498907637f4b53807466c8fba00a2b8" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:28.710737 systemd[1]: Started cri-containerd-3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604.scope - libcontainer container 3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604. Jun 21 05:05:28.723287 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:28.754159 containerd[1565]: time="2025-06-21T05:05:28.754107180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b69c775df-jfqx8,Uid:e769b160-ade4-402a-9f48-f153b80ddcf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604\"" Jun 21 05:05:29.056651 systemd-networkd[1461]: calic63ff64e4e7: Gained IPv6LL Jun 21 05:05:29.159133 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:41364.service - OpenSSH per-connection server daemon (10.0.0.1:41364). Jun 21 05:05:29.214402 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 41364 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:05:29.215918 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:05:29.220434 systemd-logind[1550]: New session 8 of user core. Jun 21 05:05:29.230617 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 21 05:05:29.244457 containerd[1565]: time="2025-06-21T05:05:29.244411201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnpdd,Uid:3eee9492-ff67-4a0d-a49e-690cbd0112e0,Namespace:calico-system,Attempt:0,}" Jun 21 05:05:29.352553 systemd-networkd[1461]: caliacb76c3ad0a: Link UP Jun 21 05:05:29.353538 systemd-networkd[1461]: caliacb76c3ad0a: Gained carrier Jun 21 05:05:29.376779 containerd[1565]: 2025-06-21 05:05:29.269 [INFO][4252] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:29.376779 containerd[1565]: 2025-06-21 05:05:29.280 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gnpdd-eth0 csi-node-driver- calico-system 3eee9492-ff67-4a0d-a49e-690cbd0112e0 749 0 2025-06-21 05:05:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gnpdd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliacb76c3ad0a [] [] }} ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-" Jun 21 05:05:29.376779 containerd[1565]: 2025-06-21 05:05:29.281 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.376779 containerd[1565]: 2025-06-21 05:05:29.310 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" HandleID="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Workload="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.376639 systemd-networkd[1461]: cali94a50a26be2: Gained IPv6LL Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.310 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" HandleID="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Workload="localhost-k8s-csi--node--driver--gnpdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gnpdd", "timestamp":"2025-06-21 05:05:29.310826776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.311 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.311 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.311 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.318 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" host="localhost" Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.323 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.326 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.328 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.332 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:29.377111 containerd[1565]: 2025-06-21 05:05:29.332 [INFO][4271] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" host="localhost" Jun 21 05:05:29.377328 containerd[1565]: 2025-06-21 05:05:29.333 [INFO][4271] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84 Jun 21 05:05:29.377328 containerd[1565]: 2025-06-21 05:05:29.338 [INFO][4271] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" host="localhost" Jun 21 05:05:29.377328 containerd[1565]: 2025-06-21 05:05:29.346 [INFO][4271] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" host="localhost" Jun 21 05:05:29.377328 containerd[1565]: 2025-06-21 05:05:29.346 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" host="localhost" Jun 21 05:05:29.377328 containerd[1565]: 2025-06-21 05:05:29.346 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:29.377328 containerd[1565]: 2025-06-21 05:05:29.346 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" HandleID="k8s-pod-network.e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Workload="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.377457 containerd[1565]: 2025-06-21 05:05:29.349 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gnpdd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eee9492-ff67-4a0d-a49e-690cbd0112e0", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gnpdd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliacb76c3ad0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:29.377628 containerd[1565]: 2025-06-21 05:05:29.349 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.377628 containerd[1565]: 2025-06-21 05:05:29.349 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacb76c3ad0a ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.377628 containerd[1565]: 2025-06-21 05:05:29.354 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.377698 containerd[1565]: 2025-06-21 05:05:29.357 [INFO][4252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" 
Namespace="calico-system" Pod="csi-node-driver-gnpdd" WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gnpdd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3eee9492-ff67-4a0d-a49e-690cbd0112e0", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84", Pod:"csi-node-driver-gnpdd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliacb76c3ad0a", MAC:"da:57:be:5c:33:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:29.377762 containerd[1565]: 2025-06-21 05:05:29.370 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" Namespace="calico-system" Pod="csi-node-driver-gnpdd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gnpdd-eth0" Jun 21 05:05:29.385246 sshd[4250]: Connection closed by 10.0.0.1 port 41364 Jun 21 05:05:29.385560 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:29.390116 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:41364.service: Deactivated successfully. Jun 21 05:05:29.392158 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 05:05:29.393052 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Jun 21 05:05:29.399841 systemd-logind[1550]: Removed session 8. Jun 21 05:05:29.401341 containerd[1565]: time="2025-06-21T05:05:29.401297016Z" level=info msg="connecting to shim e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84" address="unix:///run/containerd/s/fa8809443b48f34c27ffd9ed5f6d27b9fb2307307bf73e3c6397e4719a87162a" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:29.429624 systemd[1]: Started cri-containerd-e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84.scope - libcontainer container e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84. 
Jun 21 05:05:29.440919 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:29.453018 containerd[1565]: time="2025-06-21T05:05:29.452975757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gnpdd,Uid:3eee9492-ff67-4a0d-a49e-690cbd0112e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84\"" Jun 21 05:05:29.760715 systemd-networkd[1461]: cali305c7f26478: Gained IPv6LL Jun 21 05:05:29.925131 containerd[1565]: time="2025-06-21T05:05:29.925052163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:29.943902 containerd[1565]: time="2025-06-21T05:05:29.926343163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 21 05:05:29.943902 containerd[1565]: time="2025-06-21T05:05:29.927797802Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:29.944105 containerd[1565]: time="2025-06-21T05:05:29.931108966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 2.425147843s" Jun 21 05:05:29.944105 containerd[1565]: time="2025-06-21T05:05:29.944039408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 21 05:05:29.946177 containerd[1565]: time="2025-06-21T05:05:29.946121308Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:29.947504 containerd[1565]: time="2025-06-21T05:05:29.947450420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 05:05:29.948427 containerd[1565]: time="2025-06-21T05:05:29.948363780Z" level=info msg="CreateContainer within sandbox \"0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 21 05:05:29.974512 containerd[1565]: time="2025-06-21T05:05:29.974442807Z" level=info msg="Container d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:29.985439 containerd[1565]: time="2025-06-21T05:05:29.985187885Z" level=info msg="CreateContainer within sandbox \"0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427\"" Jun 21 05:05:29.986937 containerd[1565]: time="2025-06-21T05:05:29.986898756Z" level=info msg="StartContainer for \"d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427\"" Jun 21 05:05:29.990856 containerd[1565]: time="2025-06-21T05:05:29.990809428Z" level=info msg="connecting to shim d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427" address="unix:///run/containerd/s/1d0fb60c50caeaa313770ba0343b948e5fbe9ea10c547ae26617faa85608980b" protocol=ttrpc version=3 Jun 21 05:05:30.014694 systemd[1]: Started cri-containerd-d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427.scope - libcontainer container d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427. 
Jun 21 05:05:30.069353 containerd[1565]: time="2025-06-21T05:05:30.069306159Z" level=info msg="StartContainer for \"d4951a81b82d42b6233b51eeaa929f3890f16f673de0f027a98bcec3c4e09427\" returns successfully" Jun 21 05:05:30.244776 kubelet[2681]: E0621 05:05:30.244718 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:30.245435 containerd[1565]: time="2025-06-21T05:05:30.245392209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b855447fc-gjvrv,Uid:1c5ebf5d-9ea4-4847-b325-026a751564b0,Namespace:calico-apiserver,Attempt:0,}" Jun 21 05:05:30.245951 containerd[1565]: time="2025-06-21T05:05:30.245918159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dljws,Uid:240bb991-39e3-416d-9e70-c9d62b670e47,Namespace:kube-system,Attempt:0,}" Jun 21 05:05:30.369399 systemd-networkd[1461]: cali959e595346c: Link UP Jun 21 05:05:30.370666 systemd-networkd[1461]: cali959e595346c: Gained carrier Jun 21 05:05:30.382581 containerd[1565]: 2025-06-21 05:05:30.287 [INFO][4409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:30.382581 containerd[1565]: 2025-06-21 05:05:30.299 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dljws-eth0 coredns-668d6bf9bc- kube-system 240bb991-39e3-416d-9e70-c9d62b670e47 850 0 2025-06-21 05:04:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dljws eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali959e595346c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-" Jun 21 05:05:30.382581 containerd[1565]: 2025-06-21 05:05:30.299 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.382581 containerd[1565]: 2025-06-21 05:05:30.330 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" HandleID="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Workload="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.331 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" HandleID="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Workload="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dljws", "timestamp":"2025-06-21 05:05:30.330862411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.331 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.331 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.331 [INFO][4435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.339 [INFO][4435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" host="localhost" Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.343 [INFO][4435] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.347 [INFO][4435] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.349 [INFO][4435] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.351 [INFO][4435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:30.383008 containerd[1565]: 2025-06-21 05:05:30.351 [INFO][4435] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" host="localhost" Jun 21 05:05:30.383258 containerd[1565]: 2025-06-21 05:05:30.352 [INFO][4435] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0 Jun 21 05:05:30.383258 containerd[1565]: 2025-06-21 05:05:30.355 [INFO][4435] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" host="localhost" Jun 21 05:05:30.383258 containerd[1565]: 2025-06-21 05:05:30.362 [INFO][4435] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" host="localhost" Jun 21 05:05:30.383258 containerd[1565]: 2025-06-21 05:05:30.362 [INFO][4435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" host="localhost" Jun 21 05:05:30.383258 containerd[1565]: 2025-06-21 05:05:30.362 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:30.383258 containerd[1565]: 2025-06-21 05:05:30.363 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" HandleID="k8s-pod-network.ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Workload="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.383388 containerd[1565]: 2025-06-21 05:05:30.367 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dljws-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"240bb991-39e3-416d-9e70-c9d62b670e47", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dljws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959e595346c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:30.383470 containerd[1565]: 2025-06-21 05:05:30.367 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.383470 containerd[1565]: 2025-06-21 05:05:30.367 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali959e595346c ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.383470 containerd[1565]: 2025-06-21 05:05:30.370 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.383569 containerd[1565]: 2025-06-21 05:05:30.370 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dljws-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"240bb991-39e3-416d-9e70-c9d62b670e47", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0", Pod:"coredns-668d6bf9bc-dljws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959e595346c", MAC:"52:c5:7f:ad:0e:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:30.383569 containerd[1565]: 2025-06-21 05:05:30.379 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" Namespace="kube-system" Pod="coredns-668d6bf9bc-dljws" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dljws-eth0" Jun 21 05:05:30.402742 containerd[1565]: time="2025-06-21T05:05:30.402687170Z" level=info msg="connecting to shim ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0" address="unix:///run/containerd/s/06e0a8c9d0b112b51a96dd03e2f706545b55db62520e594372fd6eb76213a877" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:30.426607 systemd[1]: Started cri-containerd-ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0.scope - libcontainer container ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0. 
Jun 21 05:05:30.438906 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:30.470816 containerd[1565]: time="2025-06-21T05:05:30.470765376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dljws,Uid:240bb991-39e3-416d-9e70-c9d62b670e47,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0\"" Jun 21 05:05:30.471922 kubelet[2681]: E0621 05:05:30.471885 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:30.473589 systemd-networkd[1461]: calib2b9ccc7198: Link UP Jun 21 05:05:30.473782 systemd-networkd[1461]: calib2b9ccc7198: Gained carrier Jun 21 05:05:30.475239 containerd[1565]: time="2025-06-21T05:05:30.475151191Z" level=info msg="CreateContainer within sandbox \"ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:05:30.489986 containerd[1565]: time="2025-06-21T05:05:30.489924208Z" level=info msg="Container c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.290 [INFO][4403] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.302 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0 calico-apiserver-b855447fc- calico-apiserver 1c5ebf5d-9ea4-4847-b325-026a751564b0 857 0 2025-06-21 05:05:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b855447fc projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b855447fc-gjvrv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2b9ccc7198 [] [] }} ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.302 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.331 [INFO][4437] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" HandleID="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Workload="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.332 [INFO][4437] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" HandleID="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Workload="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b855447fc-gjvrv", "timestamp":"2025-06-21 05:05:30.331773205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.332 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.363 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.363 [INFO][4437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.440 [INFO][4437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.445 [INFO][4437] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.449 [INFO][4437] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.450 [INFO][4437] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.452 [INFO][4437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.452 [INFO][4437] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.453 [INFO][4437] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447 Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.458 [INFO][4437] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.468 [INFO][4437] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.468 [INFO][4437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" host="localhost" Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.468 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:30.490237 containerd[1565]: 2025-06-21 05:05:30.468 [INFO][4437] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" HandleID="k8s-pod-network.064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Workload="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.490791 containerd[1565]: 2025-06-21 05:05:30.471 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0", GenerateName:"calico-apiserver-b855447fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c5ebf5d-9ea4-4847-b325-026a751564b0", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 3, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b855447fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b855447fc-gjvrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2b9ccc7198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:30.490791 containerd[1565]: 2025-06-21 05:05:30.471 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.490791 containerd[1565]: 2025-06-21 05:05:30.471 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2b9ccc7198 ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.490791 containerd[1565]: 2025-06-21 05:05:30.473 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.490791 containerd[1565]: 2025-06-21 05:05:30.475 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0", GenerateName:"calico-apiserver-b855447fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c5ebf5d-9ea4-4847-b325-026a751564b0", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b855447fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447", Pod:"calico-apiserver-b855447fc-gjvrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calib2b9ccc7198", MAC:"96:b9:2e:81:7f:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:30.490791 containerd[1565]: 2025-06-21 05:05:30.487 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" Namespace="calico-apiserver" Pod="calico-apiserver-b855447fc-gjvrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--b855447fc--gjvrv-eth0" Jun 21 05:05:30.499755 containerd[1565]: time="2025-06-21T05:05:30.499697142Z" level=info msg="CreateContainer within sandbox \"ffff182fa6bf131a0ec42a763279d76636228f291040c662616188c8722f8fc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6\"" Jun 21 05:05:30.500508 containerd[1565]: time="2025-06-21T05:05:30.500412949Z" level=info msg="StartContainer for \"c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6\"" Jun 21 05:05:30.501443 containerd[1565]: time="2025-06-21T05:05:30.501414083Z" level=info msg="connecting to shim c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6" address="unix:///run/containerd/s/06e0a8c9d0b112b51a96dd03e2f706545b55db62520e594372fd6eb76213a877" protocol=ttrpc version=3 Jun 21 05:05:30.518694 containerd[1565]: time="2025-06-21T05:05:30.518636950Z" level=info msg="connecting to shim 064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447" address="unix:///run/containerd/s/1a7bfd27f7f43688418a068983a0a0c4490875de5b86bd878cb9dbe867f6a99c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:30.525803 systemd[1]: Started cri-containerd-c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6.scope - libcontainer container c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6. 
Jun 21 05:05:30.547686 systemd[1]: Started cri-containerd-064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447.scope - libcontainer container 064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447. Jun 21 05:05:30.559475 containerd[1565]: time="2025-06-21T05:05:30.559439479Z" level=info msg="StartContainer for \"c83a853b80d68103fb2d90be752091755402955d4504f1e55cbc1f60902106e6\" returns successfully" Jun 21 05:05:30.563358 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:30.594591 containerd[1565]: time="2025-06-21T05:05:30.594475674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b855447fc-gjvrv,Uid:1c5ebf5d-9ea4-4847-b325-026a751564b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447\"" Jun 21 05:05:30.656668 systemd-networkd[1461]: caliacb76c3ad0a: Gained IPv6LL Jun 21 05:05:31.244808 kubelet[2681]: E0621 05:05:31.244764 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:31.245359 containerd[1565]: time="2025-06-21T05:05:31.245314561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96dq5,Uid:daacbc27-7e95-4a0a-8d82-158302d37be1,Namespace:kube-system,Attempt:0,}" Jun 21 05:05:31.357703 systemd-networkd[1461]: calif9d0df77ce4: Link UP Jun 21 05:05:31.358186 systemd-networkd[1461]: calif9d0df77ce4: Gained carrier Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.270 [INFO][4600] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.283 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--96dq5-eth0 
coredns-668d6bf9bc- kube-system daacbc27-7e95-4a0a-8d82-158302d37be1 856 0 2025-06-21 05:04:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-96dq5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9d0df77ce4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.283 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.317 [INFO][4625] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" HandleID="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Workload="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.317 [INFO][4625] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" HandleID="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Workload="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-96dq5", "timestamp":"2025-06-21 05:05:31.317062347 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.317 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.317 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.317 [INFO][4625] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.324 [INFO][4625] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.330 [INFO][4625] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.335 [INFO][4625] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.337 [INFO][4625] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.339 [INFO][4625] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.339 [INFO][4625] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.340 [INFO][4625] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720 Jun 21 
05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.345 [INFO][4625] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.351 [INFO][4625] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.351 [INFO][4625] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" host="localhost" Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.351 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 21 05:05:31.375426 containerd[1565]: 2025-06-21 05:05:31.351 [INFO][4625] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" HandleID="k8s-pod-network.73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Workload="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.376533 containerd[1565]: 2025-06-21 05:05:31.355 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--96dq5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"daacbc27-7e95-4a0a-8d82-158302d37be1", ResourceVersion:"856", Generation:0, 
CreationTimestamp:time.Date(2025, time.June, 21, 5, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-96dq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9d0df77ce4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:31.376533 containerd[1565]: 2025-06-21 05:05:31.355 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.376533 containerd[1565]: 2025-06-21 05:05:31.355 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9d0df77ce4 ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.376533 containerd[1565]: 2025-06-21 05:05:31.357 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.376533 containerd[1565]: 2025-06-21 05:05:31.358 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--96dq5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"daacbc27-7e95-4a0a-8d82-158302d37be1", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.June, 21, 5, 4, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720", Pod:"coredns-668d6bf9bc-96dq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9d0df77ce4", MAC:"0e:2b:f7:ce:01:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 21 05:05:31.376533 containerd[1565]: 2025-06-21 05:05:31.371 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" Namespace="kube-system" Pod="coredns-668d6bf9bc-96dq5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--96dq5-eth0" Jun 21 05:05:31.386984 kubelet[2681]: E0621 05:05:31.386951 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:31.399575 kubelet[2681]: I0621 05:05:31.399504 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dljws" podStartSLOduration=36.39945741 podStartE2EDuration="36.39945741s" podCreationTimestamp="2025-06-21 05:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:05:31.39779428 +0000 UTC m=+40.441355885" watchObservedRunningTime="2025-06-21 05:05:31.39945741 +0000 UTC m=+40.443019015" Jun 21 05:05:31.424049 containerd[1565]: time="2025-06-21T05:05:31.423978825Z" level=info msg="connecting to shim 73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720" 
address="unix:///run/containerd/s/5b818a092b55a5032c0e7a5afe114ab35eaa5165aacf0af2263f3ad017667076" namespace=k8s.io protocol=ttrpc version=3 Jun 21 05:05:31.453620 systemd[1]: Started cri-containerd-73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720.scope - libcontainer container 73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720. Jun 21 05:05:31.466249 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 21 05:05:31.497292 containerd[1565]: time="2025-06-21T05:05:31.497179655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-96dq5,Uid:daacbc27-7e95-4a0a-8d82-158302d37be1,Namespace:kube-system,Attempt:0,} returns sandbox id \"73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720\"" Jun 21 05:05:31.498240 kubelet[2681]: E0621 05:05:31.498212 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:31.500350 containerd[1565]: time="2025-06-21T05:05:31.500296471Z" level=info msg="CreateContainer within sandbox \"73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 05:05:31.511148 containerd[1565]: time="2025-06-21T05:05:31.511095092Z" level=info msg="Container eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:31.517946 containerd[1565]: time="2025-06-21T05:05:31.517915126Z" level=info msg="CreateContainer within sandbox \"73b412be4fb20d13963c331e26997c7b4a51e973a75e5ceb9eaca1b53206e720\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230\"" Jun 21 05:05:31.518517 containerd[1565]: time="2025-06-21T05:05:31.518462055Z" level=info msg="StartContainer for 
\"eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230\"" Jun 21 05:05:31.519240 containerd[1565]: time="2025-06-21T05:05:31.519204352Z" level=info msg="connecting to shim eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230" address="unix:///run/containerd/s/5b818a092b55a5032c0e7a5afe114ab35eaa5165aacf0af2263f3ad017667076" protocol=ttrpc version=3 Jun 21 05:05:31.540656 systemd[1]: Started cri-containerd-eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230.scope - libcontainer container eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230. Jun 21 05:05:31.573985 containerd[1565]: time="2025-06-21T05:05:31.573899749Z" level=info msg="StartContainer for \"eae177078fb0f4691822b93d1d059331b2377a7769ffc3c39e426b44e1918230\" returns successfully" Jun 21 05:05:31.681637 systemd-networkd[1461]: cali959e595346c: Gained IPv6LL Jun 21 05:05:32.130589 systemd-networkd[1461]: calib2b9ccc7198: Gained IPv6LL Jun 21 05:05:32.395801 kubelet[2681]: E0621 05:05:32.395676 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:32.396279 kubelet[2681]: E0621 05:05:32.396260 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:32.410334 kubelet[2681]: I0621 05:05:32.409433 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-96dq5" podStartSLOduration=37.409416685 podStartE2EDuration="37.409416685s" podCreationTimestamp="2025-06-21 05:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 05:05:32.408309041 +0000 UTC m=+41.451870646" watchObservedRunningTime="2025-06-21 05:05:32.409416685 +0000 UTC m=+41.452978280" Jun 21 
05:05:32.641767 systemd-networkd[1461]: calif9d0df77ce4: Gained IPv6LL Jun 21 05:05:32.849088 containerd[1565]: time="2025-06-21T05:05:32.849031352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:32.849740 containerd[1565]: time="2025-06-21T05:05:32.849714688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 21 05:05:32.851104 containerd[1565]: time="2025-06-21T05:05:32.851057504Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:32.853216 containerd[1565]: time="2025-06-21T05:05:32.853186409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:32.853799 containerd[1565]: time="2025-06-21T05:05:32.853775036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 2.906293608s" Jun 21 05:05:32.853854 containerd[1565]: time="2025-06-21T05:05:32.853804162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 21 05:05:32.854933 containerd[1565]: time="2025-06-21T05:05:32.854895644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 21 05:05:32.856085 containerd[1565]: time="2025-06-21T05:05:32.856054696Z" level=info msg="CreateContainer 
within sandbox \"b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 05:05:32.866216 containerd[1565]: time="2025-06-21T05:05:32.866162843Z" level=info msg="Container 86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:32.874343 containerd[1565]: time="2025-06-21T05:05:32.874283622Z" level=info msg="CreateContainer within sandbox \"b59c7870e8fc1406ce01c25ccc9b8879f6270f276e53289fb2b69c497943bcbc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78\"" Jun 21 05:05:32.874912 containerd[1565]: time="2025-06-21T05:05:32.874873322Z" level=info msg="StartContainer for \"86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78\"" Jun 21 05:05:32.876128 containerd[1565]: time="2025-06-21T05:05:32.876093798Z" level=info msg="connecting to shim 86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78" address="unix:///run/containerd/s/2466a9a82c3bd1e0724a481c052da771f974558397ec85c1f4734d3a774de085" protocol=ttrpc version=3 Jun 21 05:05:32.898635 systemd[1]: Started cri-containerd-86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78.scope - libcontainer container 86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78. 
Jun 21 05:05:32.950414 containerd[1565]: time="2025-06-21T05:05:32.950360939Z" level=info msg="StartContainer for \"86ed196155ba792b39d580e96ac7e13271568c680b272b47be06bcb89847fc78\" returns successfully" Jun 21 05:05:33.401847 kubelet[2681]: E0621 05:05:33.401786 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:33.404952 kubelet[2681]: E0621 05:05:33.404891 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:34.052154 kubelet[2681]: I0621 05:05:34.052104 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:34.160017 containerd[1565]: time="2025-06-21T05:05:34.159963998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490\" id:\"6d10018f96b75826e628e1bdd73222c16ff2e8f746824af7408c4e9862f3deda\" pid:4840 exit_status:1 exited_at:{seconds:1750482334 nanos:159601677}" Jun 21 05:05:34.247891 containerd[1565]: time="2025-06-21T05:05:34.247827422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490\" id:\"3423c022ccd30cd6a87f40cd4589575fd2f32698b46e7fafb73cfb6b095d70f3\" pid:4866 exit_status:1 exited_at:{seconds:1750482334 nanos:247516026}" Jun 21 05:05:34.399989 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:41366.service - OpenSSH per-connection server daemon (10.0.0.1:41366). 
Jun 21 05:05:34.404266 kubelet[2681]: I0621 05:05:34.404203 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:34.404540 kubelet[2681]: E0621 05:05:34.404273 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:34.520865 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 41366 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:05:34.523156 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:05:34.529165 systemd-logind[1550]: New session 9 of user core. Jun 21 05:05:34.541783 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 05:05:34.698228 sshd[4890]: Connection closed by 10.0.0.1 port 41366 Jun 21 05:05:34.698737 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:34.704800 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:41366.service: Deactivated successfully. Jun 21 05:05:34.707459 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 05:05:34.708916 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Jun 21 05:05:34.710755 systemd-logind[1550]: Removed session 9. Jun 21 05:05:35.384671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774061330.mount: Deactivated successfully. 
Jun 21 05:05:36.255354 containerd[1565]: time="2025-06-21T05:05:36.255283899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:36.256290 containerd[1565]: time="2025-06-21T05:05:36.256239566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 21 05:05:36.257750 containerd[1565]: time="2025-06-21T05:05:36.257719619Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:36.260322 containerd[1565]: time="2025-06-21T05:05:36.260267690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:36.260853 containerd[1565]: time="2025-06-21T05:05:36.260822463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 3.405895709s" Jun 21 05:05:36.260914 containerd[1565]: time="2025-06-21T05:05:36.260854453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 21 05:05:36.262164 containerd[1565]: time="2025-06-21T05:05:36.262113982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 21 05:05:36.263176 containerd[1565]: time="2025-06-21T05:05:36.263144399Z" level=info msg="CreateContainer within sandbox 
\"d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 21 05:05:36.270925 containerd[1565]: time="2025-06-21T05:05:36.270883590Z" level=info msg="Container 3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:36.279840 containerd[1565]: time="2025-06-21T05:05:36.279785267Z" level=info msg="CreateContainer within sandbox \"d931ce645cd8ad4f3df01fd5a55091e2cac65b2c84413c322a92791b596c03a6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\"" Jun 21 05:05:36.280686 containerd[1565]: time="2025-06-21T05:05:36.280616100Z" level=info msg="StartContainer for \"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\"" Jun 21 05:05:36.281917 containerd[1565]: time="2025-06-21T05:05:36.281866461Z" level=info msg="connecting to shim 3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c" address="unix:///run/containerd/s/4e4f21b8d6cd611c8e5361f61846cc77be4003e513200154ae2a945ead21d29b" protocol=ttrpc version=3 Jun 21 05:05:36.337758 systemd[1]: Started cri-containerd-3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c.scope - libcontainer container 3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c. 
Jun 21 05:05:36.390168 containerd[1565]: time="2025-06-21T05:05:36.390125484Z" level=info msg="StartContainer for \"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\" returns successfully" Jun 21 05:05:36.423936 kubelet[2681]: I0621 05:05:36.423102 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b855447fc-sd4n7" podStartSLOduration=28.18930713 podStartE2EDuration="33.423083101s" podCreationTimestamp="2025-06-21 05:05:03 +0000 UTC" firstStartedPulling="2025-06-21 05:05:27.620837894 +0000 UTC m=+36.664399499" lastFinishedPulling="2025-06-21 05:05:32.854613855 +0000 UTC m=+41.898175470" observedRunningTime="2025-06-21 05:05:33.421143647 +0000 UTC m=+42.464705252" watchObservedRunningTime="2025-06-21 05:05:36.423083101 +0000 UTC m=+45.466644706" Jun 21 05:05:37.051339 kubelet[2681]: I0621 05:05:37.051292 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:37.051732 kubelet[2681]: E0621 05:05:37.051694 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:37.061525 kubelet[2681]: I0621 05:05:37.061431 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-z8jd5" podStartSLOduration=23.506087206 podStartE2EDuration="32.061408447s" podCreationTimestamp="2025-06-21 05:05:05 +0000 UTC" firstStartedPulling="2025-06-21 05:05:27.706506592 +0000 UTC m=+36.750068187" lastFinishedPulling="2025-06-21 05:05:36.261827823 +0000 UTC m=+45.305389428" observedRunningTime="2025-06-21 05:05:36.424124038 +0000 UTC m=+45.467685643" watchObservedRunningTime="2025-06-21 05:05:37.061408447 +0000 UTC m=+46.104970052" Jun 21 05:05:37.411682 kubelet[2681]: I0621 05:05:37.411548 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:37.412268 
kubelet[2681]: E0621 05:05:37.412230 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 21 05:05:37.550842 systemd-networkd[1461]: vxlan.calico: Link UP Jun 21 05:05:37.550853 systemd-networkd[1461]: vxlan.calico: Gained carrier Jun 21 05:05:38.912708 systemd-networkd[1461]: vxlan.calico: Gained IPv6LL Jun 21 05:05:38.968984 containerd[1565]: time="2025-06-21T05:05:38.968928278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:38.969885 containerd[1565]: time="2025-06-21T05:05:38.969835713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 21 05:05:38.971120 containerd[1565]: time="2025-06-21T05:05:38.971061668Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:38.973130 containerd[1565]: time="2025-06-21T05:05:38.973091314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:38.973738 containerd[1565]: time="2025-06-21T05:05:38.973701791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 2.711549497s" Jun 21 05:05:38.973792 containerd[1565]: time="2025-06-21T05:05:38.973737328Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 21 05:05:38.979237 containerd[1565]: time="2025-06-21T05:05:38.978937613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 21 05:05:38.997127 containerd[1565]: time="2025-06-21T05:05:38.997090987Z" level=info msg="CreateContainer within sandbox \"3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 21 05:05:39.005627 containerd[1565]: time="2025-06-21T05:05:39.005588208Z" level=info msg="Container 8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:39.013044 containerd[1565]: time="2025-06-21T05:05:39.013008274Z" level=info msg="CreateContainer within sandbox \"3fbbc95b1a1729f807274c393b25bb663d31ec25aaf38ba7c7008c068df2f604\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac\"" Jun 21 05:05:39.013536 containerd[1565]: time="2025-06-21T05:05:39.013495380Z" level=info msg="StartContainer for \"8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac\"" Jun 21 05:05:39.014597 containerd[1565]: time="2025-06-21T05:05:39.014568106Z" level=info msg="connecting to shim 8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac" address="unix:///run/containerd/s/82f15d3bae79994b2bd8f74221e53d605498907637f4b53807466c8fba00a2b8" protocol=ttrpc version=3 Jun 21 05:05:39.037662 systemd[1]: Started cri-containerd-8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac.scope - libcontainer container 8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac. 
Jun 21 05:05:39.094040 containerd[1565]: time="2025-06-21T05:05:39.093976814Z" level=info msg="StartContainer for \"8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac\" returns successfully" Jun 21 05:05:39.435172 kubelet[2681]: I0621 05:05:39.435049 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b69c775df-jfqx8" podStartSLOduration=23.211853285 podStartE2EDuration="33.435026087s" podCreationTimestamp="2025-06-21 05:05:06 +0000 UTC" firstStartedPulling="2025-06-21 05:05:28.755612074 +0000 UTC m=+37.799173679" lastFinishedPulling="2025-06-21 05:05:38.978784866 +0000 UTC m=+48.022346481" observedRunningTime="2025-06-21 05:05:39.434344927 +0000 UTC m=+48.477906532" watchObservedRunningTime="2025-06-21 05:05:39.435026087 +0000 UTC m=+48.478587692" Jun 21 05:05:39.715027 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:42996.service - OpenSSH per-connection server daemon (10.0.0.1:42996). Jun 21 05:05:39.784357 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 42996 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:05:39.786186 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:05:39.790697 systemd-logind[1550]: New session 10 of user core. Jun 21 05:05:39.803660 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 05:05:39.989576 sshd[5217]: Connection closed by 10.0.0.1 port 42996 Jun 21 05:05:39.989840 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:39.994691 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:42996.service: Deactivated successfully. Jun 21 05:05:39.996971 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 05:05:39.997952 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Jun 21 05:05:39.999371 systemd-logind[1550]: Removed session 10. 
Jun 21 05:05:40.426568 kubelet[2681]: I0621 05:05:40.426438 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:40.575643 kubelet[2681]: I0621 05:05:40.575596 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:40.736861 containerd[1565]: time="2025-06-21T05:05:40.736630156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\" id:\"db0386f59e2d0858089d054914bd4d630ff0ff72790966a5080c96cb5380da4c\" pid:5244 exit_status:1 exited_at:{seconds:1750482340 nanos:735538834}" Jun 21 05:05:40.776992 containerd[1565]: time="2025-06-21T05:05:40.776915653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:40.793599 containerd[1565]: time="2025-06-21T05:05:40.777669200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 21 05:05:40.793703 containerd[1565]: time="2025-06-21T05:05:40.778744069Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:40.793826 containerd[1565]: time="2025-06-21T05:05:40.781992865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.803022501s" Jun 21 05:05:40.793897 containerd[1565]: time="2025-06-21T05:05:40.793885044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 21 
05:05:40.795151 containerd[1565]: time="2025-06-21T05:05:40.795128320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:40.795863 containerd[1565]: time="2025-06-21T05:05:40.795775867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 21 05:05:40.797854 containerd[1565]: time="2025-06-21T05:05:40.797644909Z" level=info msg="CreateContainer within sandbox \"e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 21 05:05:40.815510 containerd[1565]: time="2025-06-21T05:05:40.815164324Z" level=info msg="Container 6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:40.826460 containerd[1565]: time="2025-06-21T05:05:40.826415358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\" id:\"f742568dc4491a189bb45b33b942f11da5d41ba6b126be41ff6e97229b37c5d0\" pid:5273 exit_status:1 exited_at:{seconds:1750482340 nanos:826053067}" Jun 21 05:05:40.833475 containerd[1565]: time="2025-06-21T05:05:40.833440579Z" level=info msg="CreateContainer within sandbox \"e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78\"" Jun 21 05:05:40.833965 containerd[1565]: time="2025-06-21T05:05:40.833912657Z" level=info msg="StartContainer for \"6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78\"" Jun 21 05:05:40.835235 containerd[1565]: time="2025-06-21T05:05:40.835206819Z" level=info msg="connecting to shim 6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78" 
address="unix:///run/containerd/s/fa8809443b48f34c27ffd9ed5f6d27b9fb2307307bf73e3c6397e4719a87162a" protocol=ttrpc version=3 Jun 21 05:05:40.861647 systemd[1]: Started cri-containerd-6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78.scope - libcontainer container 6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78. Jun 21 05:05:40.978203 containerd[1565]: time="2025-06-21T05:05:40.978151554Z" level=info msg="StartContainer for \"6c45a1a43421037b77d49b4e1dee7c72f1195c45f3a39aaa8b6ecef4490eed78\" returns successfully" Jun 21 05:05:43.107047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569232904.mount: Deactivated successfully. Jun 21 05:05:43.906946 containerd[1565]: time="2025-06-21T05:05:43.906882916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:43.908360 containerd[1565]: time="2025-06-21T05:05:43.908330997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 21 05:05:43.909560 containerd[1565]: time="2025-06-21T05:05:43.909528496Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:43.912448 containerd[1565]: time="2025-06-21T05:05:43.912408638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:43.913180 containerd[1565]: time="2025-06-21T05:05:43.913135183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 3.11731906s" Jun 21 05:05:43.913180 containerd[1565]: time="2025-06-21T05:05:43.913168576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 21 05:05:43.914032 containerd[1565]: time="2025-06-21T05:05:43.913996531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 21 05:05:43.915873 containerd[1565]: time="2025-06-21T05:05:43.915845635Z" level=info msg="CreateContainer within sandbox \"0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 21 05:05:43.929080 containerd[1565]: time="2025-06-21T05:05:43.928399050Z" level=info msg="Container 10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:43.936981 containerd[1565]: time="2025-06-21T05:05:43.936902675Z" level=info msg="CreateContainer within sandbox \"0cff4da261e4e7c833fa2e17cfede18780f471878478f524dd204849138813e8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309\"" Jun 21 05:05:43.937613 containerd[1565]: time="2025-06-21T05:05:43.937568867Z" level=info msg="StartContainer for \"10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309\"" Jun 21 05:05:43.939278 containerd[1565]: time="2025-06-21T05:05:43.939027578Z" level=info msg="connecting to shim 10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309" address="unix:///run/containerd/s/1d0fb60c50caeaa313770ba0343b948e5fbe9ea10c547ae26617faa85608980b" protocol=ttrpc version=3 Jun 21 05:05:43.961643 systemd[1]: Started cri-containerd-10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309.scope - 
libcontainer container 10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309. Jun 21 05:05:44.013235 containerd[1565]: time="2025-06-21T05:05:44.012941629Z" level=info msg="StartContainer for \"10303efc72123510b3c963a75280d5bd182aa7779e637a573788695b7938b309\" returns successfully" Jun 21 05:05:44.305776 containerd[1565]: time="2025-06-21T05:05:44.305707506Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 05:05:44.306556 containerd[1565]: time="2025-06-21T05:05:44.306505475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 21 05:05:44.308366 containerd[1565]: time="2025-06-21T05:05:44.308318401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 394.285451ms" Jun 21 05:05:44.308366 containerd[1565]: time="2025-06-21T05:05:44.308359698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 21 05:05:44.309251 containerd[1565]: time="2025-06-21T05:05:44.309214113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 21 05:05:44.311065 containerd[1565]: time="2025-06-21T05:05:44.311017682Z" level=info msg="CreateContainer within sandbox \"064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 21 05:05:44.328920 containerd[1565]: time="2025-06-21T05:05:44.328868385Z" level=info msg="Container 
76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882: CDI devices from CRI Config.CDIDevices: []" Jun 21 05:05:44.338365 containerd[1565]: time="2025-06-21T05:05:44.338316935Z" level=info msg="CreateContainer within sandbox \"064c7aaf8876de14622204d015d5edbc6ffb7d240227e315930f1a2ba3cdc447\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882\"" Jun 21 05:05:44.338854 containerd[1565]: time="2025-06-21T05:05:44.338796706Z" level=info msg="StartContainer for \"76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882\"" Jun 21 05:05:44.340088 containerd[1565]: time="2025-06-21T05:05:44.340053778Z" level=info msg="connecting to shim 76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882" address="unix:///run/containerd/s/1a7bfd27f7f43688418a068983a0a0c4490875de5b86bd878cb9dbe867f6a99c" protocol=ttrpc version=3 Jun 21 05:05:44.368745 systemd[1]: Started cri-containerd-76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882.scope - libcontainer container 76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882. 
Jun 21 05:05:44.429320 containerd[1565]: time="2025-06-21T05:05:44.429279503Z" level=info msg="StartContainer for \"76424d7c52418213761736f6601450be0501344e5d4ab388f893cb9fd3645882\" returns successfully" Jun 21 05:05:44.452191 kubelet[2681]: I0621 05:05:44.452102 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b855447fc-gjvrv" podStartSLOduration=27.739135141 podStartE2EDuration="41.452085115s" podCreationTimestamp="2025-06-21 05:05:03 +0000 UTC" firstStartedPulling="2025-06-21 05:05:30.596106113 +0000 UTC m=+39.639667718" lastFinishedPulling="2025-06-21 05:05:44.309056087 +0000 UTC m=+53.352617692" observedRunningTime="2025-06-21 05:05:44.45152389 +0000 UTC m=+53.495085485" watchObservedRunningTime="2025-06-21 05:05:44.452085115 +0000 UTC m=+53.495646720" Jun 21 05:05:44.483271 kubelet[2681]: I0621 05:05:44.483222 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 21 05:05:44.534983 containerd[1565]: time="2025-06-21T05:05:44.534706513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac\" id:\"cfa1cadaf7bbee298e5431c497688522c034ae2be89f3c62fd97627dcb6bb31f\" pid:5414 exited_at:{seconds:1750482344 nanos:534148014}" Jun 21 05:05:44.559554 kubelet[2681]: I0621 05:05:44.558969 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76447d75c6-87bqz" podStartSLOduration=2.150830207 podStartE2EDuration="18.558947301s" podCreationTimestamp="2025-06-21 05:05:26 +0000 UTC" firstStartedPulling="2025-06-21 05:05:27.505711903 +0000 UTC m=+36.549273508" lastFinishedPulling="2025-06-21 05:05:43.913828986 +0000 UTC m=+52.957390602" observedRunningTime="2025-06-21 05:05:44.466165466 +0000 UTC m=+53.509727081" watchObservedRunningTime="2025-06-21 05:05:44.558947301 +0000 UTC m=+53.602508897" Jun 21 05:05:44.608935 containerd[1565]: 
time="2025-06-21T05:05:44.608891835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac\" id:\"b89b60e623a53b082e2adae790eff0f715d9c92a64f98068f4982a614c8a4f89\" pid:5444 exited_at:{seconds:1750482344 nanos:608339376}" Jun 21 05:05:45.006596 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008). Jun 21 05:05:45.086575 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:05:45.088658 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:05:45.094095 systemd-logind[1550]: New session 11 of user core. Jun 21 05:05:45.105714 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 05:05:45.268855 sshd[5458]: Connection closed by 10.0.0.1 port 43008 Jun 21 05:05:45.269384 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:45.280687 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:43008.service: Deactivated successfully. Jun 21 05:05:45.282832 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 05:05:45.284660 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Jun 21 05:05:45.289481 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022). Jun 21 05:05:45.292985 systemd-logind[1550]: Removed session 11. Jun 21 05:05:45.342311 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:05:45.344164 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:05:45.349332 systemd-logind[1550]: New session 12 of user core. Jun 21 05:05:45.356675 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 21 05:05:45.629425 sshd[5474]: Connection closed by 10.0.0.1 port 43022 Jun 21 05:05:45.630719 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:45.639865 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:43022.service: Deactivated successfully. Jun 21 05:05:45.641907 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 05:05:45.643310 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Jun 21 05:05:45.645903 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:43034.service - OpenSSH per-connection server daemon (10.0.0.1:43034). Jun 21 05:05:45.647458 systemd-logind[1550]: Removed session 12. Jun 21 05:05:45.707038 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 43034 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60 Jun 21 05:05:45.708735 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 05:05:45.714023 systemd-logind[1550]: New session 13 of user core. Jun 21 05:05:45.731650 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 05:05:45.852206 sshd[5488]: Connection closed by 10.0.0.1 port 43034 Jun 21 05:05:45.852949 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Jun 21 05:05:45.857874 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:43034.service: Deactivated successfully. Jun 21 05:05:45.860460 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 05:05:45.861464 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Jun 21 05:05:45.863407 systemd-logind[1550]: Removed session 13. 
Jun 21 05:05:48.460305 containerd[1565]: time="2025-06-21T05:05:48.460239643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:05:48.494958 containerd[1565]: time="2025-06-21T05:05:48.494920875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633"
Jun 21 05:05:48.512759 containerd[1565]: time="2025-06-21T05:05:48.512725652Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:05:48.537973 containerd[1565]: time="2025-06-21T05:05:48.537946857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 05:05:48.538581 containerd[1565]: time="2025-06-21T05:05:48.538539210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 4.229291874s"
Jun 21 05:05:48.538630 containerd[1565]: time="2025-06-21T05:05:48.538584104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\""
Jun 21 05:05:48.540341 containerd[1565]: time="2025-06-21T05:05:48.540312730Z" level=info msg="CreateContainer within sandbox \"e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 21 05:05:48.738419 containerd[1565]: time="2025-06-21T05:05:48.738262428Z" level=info msg="Container c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5: CDI devices from CRI Config.CDIDevices: []"
Jun 21 05:05:48.906364 containerd[1565]: time="2025-06-21T05:05:48.906301413Z" level=info msg="CreateContainer within sandbox \"e7347ce46b5aa6917353c8c594cda99f638f025dba898ae0350112807975bc84\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5\""
Jun 21 05:05:48.907188 containerd[1565]: time="2025-06-21T05:05:48.906879679Z" level=info msg="StartContainer for \"c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5\""
Jun 21 05:05:48.908554 containerd[1565]: time="2025-06-21T05:05:48.908527814Z" level=info msg="connecting to shim c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5" address="unix:///run/containerd/s/fa8809443b48f34c27ffd9ed5f6d27b9fb2307307bf73e3c6397e4719a87162a" protocol=ttrpc version=3
Jun 21 05:05:48.937673 systemd[1]: Started cri-containerd-c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5.scope - libcontainer container c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5.
Jun 21 05:05:49.026797 containerd[1565]: time="2025-06-21T05:05:49.026688185Z" level=info msg="StartContainer for \"c4b2cb3dade23f3123c74fabda0556420e5580f5aa6ae8c5c0cba1308b95eea5\" returns successfully"
Jun 21 05:05:49.378048 kubelet[2681]: I0621 05:05:49.377914 2681 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 21 05:05:49.378048 kubelet[2681]: I0621 05:05:49.377976 2681 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 21 05:05:49.588365 kubelet[2681]: I0621 05:05:49.588230 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gnpdd" podStartSLOduration=24.503158746 podStartE2EDuration="43.588204802s" podCreationTimestamp="2025-06-21 05:05:06 +0000 UTC" firstStartedPulling="2025-06-21 05:05:29.454145078 +0000 UTC m=+38.497706683" lastFinishedPulling="2025-06-21 05:05:48.539191144 +0000 UTC m=+57.582752739" observedRunningTime="2025-06-21 05:05:49.587909116 +0000 UTC m=+58.631470721" watchObservedRunningTime="2025-06-21 05:05:49.588204802 +0000 UTC m=+58.631766407"
Jun 21 05:05:50.867417 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:58328.service - OpenSSH per-connection server daemon (10.0.0.1:58328).
Jun 21 05:05:50.937904 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:05:50.940028 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:05:50.944520 systemd-logind[1550]: New session 14 of user core.
Jun 21 05:05:50.955634 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 21 05:05:51.105956 sshd[5557]: Connection closed by 10.0.0.1 port 58328
Jun 21 05:05:51.106343 sshd-session[5555]: pam_unix(sshd:session): session closed for user core
Jun 21 05:05:51.114773 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:58328.service: Deactivated successfully.
Jun 21 05:05:51.116750 systemd[1]: session-14.scope: Deactivated successfully.
Jun 21 05:05:51.117749 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit.
Jun 21 05:05:51.119160 systemd-logind[1550]: Removed session 14.
Jun 21 05:05:56.119279 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:57300.service - OpenSSH per-connection server daemon (10.0.0.1:57300).
Jun 21 05:05:56.180499 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 57300 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:05:56.181956 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:05:56.186342 systemd-logind[1550]: New session 15 of user core.
Jun 21 05:05:56.197616 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 21 05:05:56.315954 sshd[5576]: Connection closed by 10.0.0.1 port 57300
Jun 21 05:05:56.316245 sshd-session[5574]: pam_unix(sshd:session): session closed for user core
Jun 21 05:05:56.320579 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:57300.service: Deactivated successfully.
Jun 21 05:05:56.322962 systemd[1]: session-15.scope: Deactivated successfully.
Jun 21 05:05:56.324647 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit.
Jun 21 05:05:56.326213 systemd-logind[1550]: Removed session 15.
Jun 21 05:05:57.967925 kubelet[2681]: I0621 05:05:57.967870 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 21 05:06:01.334743 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:57316.service - OpenSSH per-connection server daemon (10.0.0.1:57316).
Jun 21 05:06:01.399337 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 57316 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:01.401159 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:01.408175 systemd-logind[1550]: New session 16 of user core.
Jun 21 05:06:01.412795 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 21 05:06:01.573350 sshd[5606]: Connection closed by 10.0.0.1 port 57316
Jun 21 05:06:01.573898 sshd-session[5604]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:01.583734 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit.
Jun 21 05:06:01.585834 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:57316.service: Deactivated successfully.
Jun 21 05:06:01.588476 systemd[1]: session-16.scope: Deactivated successfully.
Jun 21 05:06:01.590360 systemd-logind[1550]: Removed session 16.
Jun 21 05:06:04.033711 containerd[1565]: time="2025-06-21T05:06:04.033662907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\" id:\"6ad441c2f17003126c65346cd6cc6e0c0b909f55073d4daf5b9ca19f0c0feac6\" pid:5631 exited_at:{seconds:1750482364 nanos:33139162}"
Jun 21 05:06:04.244661 kubelet[2681]: E0621 05:06:04.244626 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 05:06:04.261925 containerd[1565]: time="2025-06-21T05:06:04.261870817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79288db2e582815ada5e65364c8e172776a392db1ff1e5de0f53e5ae721e5490\" id:\"2910045417ae9f53d9e19b0e73bbc9b149837fc1014fd496cc87f24783048874\" pid:5655 exited_at:{seconds:1750482364 nanos:261431776}"
Jun 21 05:06:06.593836 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:56022.service - OpenSSH per-connection server daemon (10.0.0.1:56022).
Jun 21 05:06:06.663936 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 56022 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:06.667009 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:06.672380 systemd-logind[1550]: New session 17 of user core.
Jun 21 05:06:06.681649 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 21 05:06:06.878676 sshd[5671]: Connection closed by 10.0.0.1 port 56022
Jun 21 05:06:06.879152 sshd-session[5669]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:06.884248 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:56022.service: Deactivated successfully.
Jun 21 05:06:06.886409 systemd[1]: session-17.scope: Deactivated successfully.
Jun 21 05:06:06.887279 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit.
Jun 21 05:06:06.888586 systemd-logind[1550]: Removed session 17.
Jun 21 05:06:10.833584 containerd[1565]: time="2025-06-21T05:06:10.833533662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3beb22f8239ec007d1c466e6dc678839ac11280416d3ea4058be057dc700552c\" id:\"86c9888571d3cf081c973dd0d26ab1f2bc38e3473602a674b0a10fc2950abb7d\" pid:5698 exited_at:{seconds:1750482370 nanos:833074797}"
Jun 21 05:06:11.891190 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028).
Jun 21 05:06:11.970675 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:11.971371 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:11.978936 systemd-logind[1550]: New session 18 of user core.
Jun 21 05:06:11.987631 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 21 05:06:12.130935 sshd[5715]: Connection closed by 10.0.0.1 port 56028
Jun 21 05:06:12.131393 sshd-session[5713]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:12.144642 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:56028.service: Deactivated successfully.
Jun 21 05:06:12.146830 systemd[1]: session-18.scope: Deactivated successfully.
Jun 21 05:06:12.147843 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit.
Jun 21 05:06:12.151284 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:56042.service - OpenSSH per-connection server daemon (10.0.0.1:56042).
Jun 21 05:06:12.151965 systemd-logind[1550]: Removed session 18.
Jun 21 05:06:12.204232 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 56042 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:12.205923 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:12.210616 systemd-logind[1550]: New session 19 of user core.
Jun 21 05:06:12.219760 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 21 05:06:12.423096 sshd[5731]: Connection closed by 10.0.0.1 port 56042
Jun 21 05:06:12.424010 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:12.433687 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:56042.service: Deactivated successfully.
Jun 21 05:06:12.436374 systemd[1]: session-19.scope: Deactivated successfully.
Jun 21 05:06:12.437357 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit.
Jun 21 05:06:12.441077 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:56046.service - OpenSSH per-connection server daemon (10.0.0.1:56046).
Jun 21 05:06:12.442204 systemd-logind[1550]: Removed session 19.
Jun 21 05:06:12.508304 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 56046 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:12.510195 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:12.515654 systemd-logind[1550]: New session 20 of user core.
Jun 21 05:06:12.523637 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 21 05:06:14.315647 sshd[5745]: Connection closed by 10.0.0.1 port 56046
Jun 21 05:06:14.316021 sshd-session[5743]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:14.329435 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:56046.service: Deactivated successfully.
Jun 21 05:06:14.331532 systemd[1]: session-20.scope: Deactivated successfully.
Jun 21 05:06:14.332903 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit.
Jun 21 05:06:14.336958 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:56058.service - OpenSSH per-connection server daemon (10.0.0.1:56058).
Jun 21 05:06:14.337693 systemd-logind[1550]: Removed session 20.
Jun 21 05:06:14.395323 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 56058 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:14.396948 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:14.401674 systemd-logind[1550]: New session 21 of user core.
Jun 21 05:06:14.420625 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 21 05:06:14.579389 containerd[1565]: time="2025-06-21T05:06:14.579240095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8205064321c49237caadd155ce323cb246e2054e2dd1f564d0c9871f6657dcac\" id:\"fee677488c3fdd604092f0f7e6644e1d62f40acc2c29a3b99f024b9988c4e2a7\" pid:5786 exited_at:{seconds:1750482374 nanos:578715106}"
Jun 21 05:06:15.245565 kubelet[2681]: E0621 05:06:15.245143 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 05:06:15.249941 sshd[5767]: Connection closed by 10.0.0.1 port 56058
Jun 21 05:06:15.251207 sshd-session[5765]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:15.262459 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:56058.service: Deactivated successfully.
Jun 21 05:06:15.264614 systemd[1]: session-21.scope: Deactivated successfully.
Jun 21 05:06:15.265436 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit.
Jun 21 05:06:15.269810 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:56060.service - OpenSSH per-connection server daemon (10.0.0.1:56060).
Jun 21 05:06:15.271064 systemd-logind[1550]: Removed session 21.
Jun 21 05:06:15.326277 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 56060 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:15.328346 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:15.337810 systemd-logind[1550]: New session 22 of user core.
Jun 21 05:06:15.344898 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 21 05:06:15.494071 sshd[5802]: Connection closed by 10.0.0.1 port 56060
Jun 21 05:06:15.494426 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:15.499640 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:56060.service: Deactivated successfully.
Jun 21 05:06:15.503318 systemd[1]: session-22.scope: Deactivated successfully.
Jun 21 05:06:15.504535 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit.
Jun 21 05:06:15.507417 systemd-logind[1550]: Removed session 22.
Jun 21 05:06:20.507715 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:54784.service - OpenSSH per-connection server daemon (10.0.0.1:54784).
Jun 21 05:06:21.264025 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 54784 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:21.265703 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:21.271866 systemd-logind[1550]: New session 23 of user core.
Jun 21 05:06:21.277725 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 21 05:06:21.507002 sshd[5827]: Connection closed by 10.0.0.1 port 54784
Jun 21 05:06:21.507176 sshd-session[5825]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:21.516992 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:54784.service: Deactivated successfully.
Jun 21 05:06:21.519964 systemd[1]: session-23.scope: Deactivated successfully.
Jun 21 05:06:21.522447 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit.
Jun 21 05:06:21.524180 systemd-logind[1550]: Removed session 23.
Jun 21 05:06:22.244700 kubelet[2681]: E0621 05:06:22.244647 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 05:06:26.525643 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:45640.service - OpenSSH per-connection server daemon (10.0.0.1:45640).
Jun 21 05:06:26.596898 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 45640 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:26.599137 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:26.604088 systemd-logind[1550]: New session 24 of user core.
Jun 21 05:06:26.611674 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 21 05:06:26.794350 sshd[5844]: Connection closed by 10.0.0.1 port 45640
Jun 21 05:06:26.794405 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:26.800630 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:45640.service: Deactivated successfully.
Jun 21 05:06:26.803153 systemd[1]: session-24.scope: Deactivated successfully.
Jun 21 05:06:26.804967 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit.
Jun 21 05:06:26.806187 systemd-logind[1550]: Removed session 24.
Jun 21 05:06:29.244702 kubelet[2681]: E0621 05:06:29.244635 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 21 05:06:31.813245 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:45644.service - OpenSSH per-connection server daemon (10.0.0.1:45644).
Jun 21 05:06:31.884553 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 45644 ssh2: RSA SHA256:UcUMoAuz6+rdewXVNINfGwLYEuDJpooqWrO3V6JQU60
Jun 21 05:06:31.886884 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 05:06:31.892918 systemd-logind[1550]: New session 25 of user core.
Jun 21 05:06:31.897640 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 21 05:06:32.103011 sshd[5861]: Connection closed by 10.0.0.1 port 45644
Jun 21 05:06:32.103321 sshd-session[5859]: pam_unix(sshd:session): session closed for user core
Jun 21 05:06:32.108146 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:45644.service: Deactivated successfully.
Jun 21 05:06:32.110715 systemd[1]: session-25.scope: Deactivated successfully.
Jun 21 05:06:32.111795 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit.
Jun 21 05:06:32.113759 systemd-logind[1550]: Removed session 25.